Jan 31 00:55:35 np0005603541 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 00:55:35 np0005603541 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 00:55:35 np0005603541 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 00:55:35 np0005603541 kernel: BIOS-provided physical RAM map:
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 00:55:35 np0005603541 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 00:55:35 np0005603541 kernel: NX (Execute Disable) protection: active
Jan 31 00:55:35 np0005603541 kernel: APIC: Static calls initialized
Jan 31 00:55:35 np0005603541 kernel: SMBIOS 2.8 present.
Jan 31 00:55:35 np0005603541 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 00:55:35 np0005603541 kernel: Hypervisor detected: KVM
Jan 31 00:55:35 np0005603541 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 00:55:35 np0005603541 kernel: kvm-clock: using sched offset of 9391431274 cycles
Jan 31 00:55:35 np0005603541 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 00:55:35 np0005603541 kernel: tsc: Detected 2799.998 MHz processor
Jan 31 00:55:35 np0005603541 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 00:55:35 np0005603541 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 00:55:35 np0005603541 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 00:55:35 np0005603541 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 00:55:35 np0005603541 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 00:55:35 np0005603541 kernel: Using GB pages for direct mapping
Jan 31 00:55:35 np0005603541 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 00:55:35 np0005603541 kernel: ACPI: Early table checksum verification disabled
Jan 31 00:55:35 np0005603541 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 00:55:35 np0005603541 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 00:55:35 np0005603541 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 00:55:35 np0005603541 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 00:55:35 np0005603541 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 00:55:35 np0005603541 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 00:55:35 np0005603541 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 00:55:35 np0005603541 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 00:55:35 np0005603541 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 00:55:35 np0005603541 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 00:55:35 np0005603541 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 00:55:35 np0005603541 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 00:55:35 np0005603541 kernel: No NUMA configuration found
Jan 31 00:55:35 np0005603541 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 00:55:35 np0005603541 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 31 00:55:35 np0005603541 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 00:55:35 np0005603541 kernel: Zone ranges:
Jan 31 00:55:35 np0005603541 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 00:55:35 np0005603541 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 00:55:35 np0005603541 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 00:55:35 np0005603541 kernel:  Device   empty
Jan 31 00:55:35 np0005603541 kernel: Movable zone start for each node
Jan 31 00:55:35 np0005603541 kernel: Early memory node ranges
Jan 31 00:55:35 np0005603541 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 00:55:35 np0005603541 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 00:55:35 np0005603541 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 00:55:35 np0005603541 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 00:55:35 np0005603541 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 00:55:35 np0005603541 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 00:55:35 np0005603541 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 00:55:35 np0005603541 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 00:55:35 np0005603541 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 00:55:35 np0005603541 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 00:55:35 np0005603541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 00:55:35 np0005603541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 00:55:35 np0005603541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 00:55:35 np0005603541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 00:55:35 np0005603541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 00:55:35 np0005603541 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 00:55:35 np0005603541 kernel: TSC deadline timer available
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Max. logical packages:   8
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Max. logical dies:       8
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Max. dies per package:   1
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Max. threads per core:   1
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Num. cores per package:     1
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Num. threads per package:   1
Jan 31 00:55:35 np0005603541 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 00:55:35 np0005603541 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 00:55:35 np0005603541 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 00:55:35 np0005603541 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 00:55:35 np0005603541 kernel: Booting paravirtualized kernel on KVM
Jan 31 00:55:35 np0005603541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 00:55:35 np0005603541 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 00:55:35 np0005603541 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 00:55:35 np0005603541 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 00:55:35 np0005603541 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 00:55:35 np0005603541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 00:55:35 np0005603541 kernel: random: crng init done
Jan 31 00:55:35 np0005603541 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: Fallback order for Node 0: 0 
Jan 31 00:55:35 np0005603541 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 00:55:35 np0005603541 kernel: Policy zone: Normal
Jan 31 00:55:35 np0005603541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 00:55:35 np0005603541 kernel: software IO TLB: area num 8.
Jan 31 00:55:35 np0005603541 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 00:55:35 np0005603541 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 00:55:35 np0005603541 kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 00:55:35 np0005603541 kernel: Dynamic Preempt: voluntary
Jan 31 00:55:35 np0005603541 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 00:55:35 np0005603541 kernel: rcu: 	RCU event tracing is enabled.
Jan 31 00:55:35 np0005603541 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 00:55:35 np0005603541 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 31 00:55:35 np0005603541 kernel: 	Rude variant of Tasks RCU enabled.
Jan 31 00:55:35 np0005603541 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 31 00:55:35 np0005603541 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 00:55:35 np0005603541 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 00:55:35 np0005603541 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 00:55:35 np0005603541 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 00:55:35 np0005603541 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 00:55:35 np0005603541 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 00:55:35 np0005603541 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 00:55:35 np0005603541 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 00:55:35 np0005603541 kernel: Console: colour VGA+ 80x25
Jan 31 00:55:35 np0005603541 kernel: printk: console [ttyS0] enabled
Jan 31 00:55:35 np0005603541 kernel: ACPI: Core revision 20230331
Jan 31 00:55:35 np0005603541 kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 00:55:35 np0005603541 kernel: x2apic enabled
Jan 31 00:55:35 np0005603541 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 00:55:35 np0005603541 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 00:55:35 np0005603541 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 00:55:35 np0005603541 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 00:55:35 np0005603541 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 00:55:35 np0005603541 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 00:55:35 np0005603541 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 00:55:35 np0005603541 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 00:55:35 np0005603541 kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 00:55:35 np0005603541 kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 00:55:35 np0005603541 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 00:55:35 np0005603541 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 00:55:35 np0005603541 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 00:55:35 np0005603541 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 00:55:35 np0005603541 kernel: active return thunk: retbleed_return_thunk
Jan 31 00:55:35 np0005603541 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 00:55:35 np0005603541 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 00:55:35 np0005603541 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 00:55:35 np0005603541 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 00:55:35 np0005603541 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 00:55:35 np0005603541 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 00:55:35 np0005603541 kernel: Freeing SMP alternatives memory: 40K
Jan 31 00:55:35 np0005603541 kernel: pid_max: default: 32768 minimum: 301
Jan 31 00:55:35 np0005603541 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 00:55:35 np0005603541 kernel: landlock: Up and running.
Jan 31 00:55:35 np0005603541 kernel: Yama: becoming mindful.
Jan 31 00:55:35 np0005603541 kernel: SELinux:  Initializing.
Jan 31 00:55:35 np0005603541 kernel: LSM support for eBPF active
Jan 31 00:55:35 np0005603541 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 00:55:35 np0005603541 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 00:55:35 np0005603541 kernel: ... version:                0
Jan 31 00:55:35 np0005603541 kernel: ... bit width:              48
Jan 31 00:55:35 np0005603541 kernel: ... generic registers:      6
Jan 31 00:55:35 np0005603541 kernel: ... value mask:             0000ffffffffffff
Jan 31 00:55:35 np0005603541 kernel: ... max period:             00007fffffffffff
Jan 31 00:55:35 np0005603541 kernel: ... fixed-purpose events:   0
Jan 31 00:55:35 np0005603541 kernel: ... event mask:             000000000000003f
Jan 31 00:55:35 np0005603541 kernel: signal: max sigframe size: 1776
Jan 31 00:55:35 np0005603541 kernel: rcu: Hierarchical SRCU implementation.
Jan 31 00:55:35 np0005603541 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 31 00:55:35 np0005603541 kernel: smp: Bringing up secondary CPUs ...
Jan 31 00:55:35 np0005603541 kernel: smpboot: x86: Booting SMP configuration:
Jan 31 00:55:35 np0005603541 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 00:55:35 np0005603541 kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 00:55:35 np0005603541 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 00:55:35 np0005603541 kernel: node 0 deferred pages initialised in 9ms
Jan 31 00:55:35 np0005603541 kernel: Memory: 7763936K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 31 00:55:35 np0005603541 kernel: devtmpfs: initialized
Jan 31 00:55:35 np0005603541 kernel: x86/mm: Memory block size: 128MB
Jan 31 00:55:35 np0005603541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 00:55:35 np0005603541 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 00:55:35 np0005603541 kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 00:55:35 np0005603541 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 00:55:35 np0005603541 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 00:55:35 np0005603541 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 00:55:35 np0005603541 kernel: audit: initializing netlink subsys (disabled)
Jan 31 00:55:35 np0005603541 kernel: audit: type=2000 audit(1769838933.540:1): state=initialized audit_enabled=0 res=1
Jan 31 00:55:35 np0005603541 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 00:55:35 np0005603541 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 00:55:35 np0005603541 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 00:55:35 np0005603541 kernel: cpuidle: using governor menu
Jan 31 00:55:35 np0005603541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 00:55:35 np0005603541 kernel: PCI: Using configuration type 1 for base access
Jan 31 00:55:35 np0005603541 kernel: PCI: Using configuration type 1 for extended access
Jan 31 00:55:35 np0005603541 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 00:55:35 np0005603541 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 00:55:35 np0005603541 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 00:55:35 np0005603541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 00:55:35 np0005603541 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 00:55:35 np0005603541 kernel: Demotion targets for Node 0: null
Jan 31 00:55:35 np0005603541 kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 00:55:35 np0005603541 kernel: ACPI: Added _OSI(Module Device)
Jan 31 00:55:35 np0005603541 kernel: ACPI: Added _OSI(Processor Device)
Jan 31 00:55:35 np0005603541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 00:55:35 np0005603541 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 00:55:35 np0005603541 kernel: ACPI: Interpreter enabled
Jan 31 00:55:35 np0005603541 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 00:55:35 np0005603541 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 00:55:35 np0005603541 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 00:55:35 np0005603541 kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 00:55:35 np0005603541 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 00:55:35 np0005603541 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [3] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [4] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [5] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [6] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [7] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [8] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [9] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [10] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [11] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [12] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [13] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [14] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [15] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [16] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [17] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [18] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [19] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [20] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [21] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [22] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [23] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [24] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [25] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [26] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [27] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [28] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [29] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [30] registered
Jan 31 00:55:35 np0005603541 kernel: acpiphp: Slot [31] registered
Jan 31 00:55:35 np0005603541 kernel: PCI host bridge to bus 0000:00
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 00:55:35 np0005603541 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 00:55:35 np0005603541 kernel: iommu: Default domain type: Translated
Jan 31 00:55:35 np0005603541 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 00:55:35 np0005603541 kernel: SCSI subsystem initialized
Jan 31 00:55:35 np0005603541 kernel: ACPI: bus type USB registered
Jan 31 00:55:35 np0005603541 kernel: usbcore: registered new interface driver usbfs
Jan 31 00:55:35 np0005603541 kernel: usbcore: registered new interface driver hub
Jan 31 00:55:35 np0005603541 kernel: usbcore: registered new device driver usb
Jan 31 00:55:35 np0005603541 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 00:55:35 np0005603541 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 00:55:35 np0005603541 kernel: PTP clock support registered
Jan 31 00:55:35 np0005603541 kernel: EDAC MC: Ver: 3.0.0
Jan 31 00:55:35 np0005603541 kernel: NetLabel: Initializing
Jan 31 00:55:35 np0005603541 kernel: NetLabel:  domain hash size = 128
Jan 31 00:55:35 np0005603541 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 00:55:35 np0005603541 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 00:55:35 np0005603541 kernel: PCI: Using ACPI for IRQ routing
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 00:55:35 np0005603541 kernel: vgaarb: loaded
Jan 31 00:55:35 np0005603541 kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 00:55:35 np0005603541 kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 00:55:35 np0005603541 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 00:55:35 np0005603541 kernel: pnp: PnP ACPI init
Jan 31 00:55:35 np0005603541 kernel: pnp: PnP ACPI: found 5 devices
Jan 31 00:55:35 np0005603541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_INET protocol family
Jan 31 00:55:35 np0005603541 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 00:55:35 np0005603541 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_XDP protocol family
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 00:55:35 np0005603541 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 00:55:35 np0005603541 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 00:55:35 np0005603541 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 27940 usecs
Jan 31 00:55:35 np0005603541 kernel: PCI: CLS 0 bytes, default 64
Jan 31 00:55:35 np0005603541 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 00:55:35 np0005603541 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 00:55:35 np0005603541 kernel: ACPI: bus type thunderbolt registered
Jan 31 00:55:35 np0005603541 kernel: Trying to unpack rootfs image as initramfs...
Jan 31 00:55:35 np0005603541 kernel: Initialise system trusted keyrings
Jan 31 00:55:35 np0005603541 kernel: Key type blacklist registered
Jan 31 00:55:35 np0005603541 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 00:55:35 np0005603541 kernel: zbud: loaded
Jan 31 00:55:35 np0005603541 kernel: integrity: Platform Keyring initialized
Jan 31 00:55:35 np0005603541 kernel: integrity: Machine keyring initialized
Jan 31 00:55:35 np0005603541 kernel: Freeing initrd memory: 88000K
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_ALG protocol family
Jan 31 00:55:35 np0005603541 kernel: xor: automatically using best checksumming function   avx       
Jan 31 00:55:35 np0005603541 kernel: Key type asymmetric registered
Jan 31 00:55:35 np0005603541 kernel: Asymmetric key parser 'x509' registered
Jan 31 00:55:35 np0005603541 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 00:55:35 np0005603541 kernel: io scheduler mq-deadline registered
Jan 31 00:55:35 np0005603541 kernel: io scheduler kyber registered
Jan 31 00:55:35 np0005603541 kernel: io scheduler bfq registered
Jan 31 00:55:35 np0005603541 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 00:55:35 np0005603541 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 00:55:35 np0005603541 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 00:55:35 np0005603541 kernel: ACPI: button: Power Button [PWRF]
Jan 31 00:55:35 np0005603541 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 00:55:35 np0005603541 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 00:55:35 np0005603541 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 00:55:35 np0005603541 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 00:55:35 np0005603541 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 00:55:35 np0005603541 kernel: Non-volatile memory driver v1.3
Jan 31 00:55:35 np0005603541 kernel: rdac: device handler registered
Jan 31 00:55:35 np0005603541 kernel: hp_sw: device handler registered
Jan 31 00:55:35 np0005603541 kernel: emc: device handler registered
Jan 31 00:55:35 np0005603541 kernel: alua: device handler registered
Jan 31 00:55:35 np0005603541 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 00:55:35 np0005603541 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 00:55:35 np0005603541 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 00:55:35 np0005603541 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 00:55:35 np0005603541 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 00:55:35 np0005603541 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 00:55:35 np0005603541 kernel: usb usb1: Product: UHCI Host Controller
Jan 31 00:55:35 np0005603541 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 00:55:35 np0005603541 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 00:55:35 np0005603541 kernel: hub 1-0:1.0: USB hub found
Jan 31 00:55:35 np0005603541 kernel: hub 1-0:1.0: 2 ports detected
Jan 31 00:55:35 np0005603541 kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 00:55:35 np0005603541 kernel: usbserial: USB Serial support registered for generic
Jan 31 00:55:35 np0005603541 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 00:55:35 np0005603541 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 00:55:35 np0005603541 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 00:55:35 np0005603541 kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 00:55:35 np0005603541 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 00:55:35 np0005603541 kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 00:55:35 np0005603541 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 00:55:35 np0005603541 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T05:55:34 UTC (1769838934)
Jan 31 00:55:35 np0005603541 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 00:55:35 np0005603541 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 00:55:35 np0005603541 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 00:55:35 np0005603541 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 00:55:35 np0005603541 kernel: usbcore: registered new interface driver usbhid
Jan 31 00:55:35 np0005603541 kernel: usbhid: USB HID core driver
Jan 31 00:55:35 np0005603541 kernel: drop_monitor: Initializing network drop monitor service
Jan 31 00:55:35 np0005603541 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 00:55:35 np0005603541 kernel: Initializing XFRM netlink socket
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_INET6 protocol family
Jan 31 00:55:35 np0005603541 kernel: Segment Routing with IPv6
Jan 31 00:55:35 np0005603541 kernel: NET: Registered PF_PACKET protocol family
Jan 31 00:55:35 np0005603541 kernel: mpls_gso: MPLS GSO support
Jan 31 00:55:35 np0005603541 kernel: IPI shorthand broadcast: enabled
Jan 31 00:55:35 np0005603541 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 00:55:35 np0005603541 kernel: AES CTR mode by8 optimization enabled
Jan 31 00:55:35 np0005603541 kernel: sched_clock: Marking stable (937004930, 150083567)->(1188095159, -101006662)
Jan 31 00:55:35 np0005603541 kernel: registered taskstats version 1
Jan 31 00:55:35 np0005603541 kernel: Loading compiled-in X.509 certificates
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 00:55:35 np0005603541 kernel: Demotion targets for Node 0: null
Jan 31 00:55:35 np0005603541 kernel: page_owner is disabled
Jan 31 00:55:35 np0005603541 kernel: Key type .fscrypt registered
Jan 31 00:55:35 np0005603541 kernel: Key type fscrypt-provisioning registered
Jan 31 00:55:35 np0005603541 kernel: Key type big_key registered
Jan 31 00:55:35 np0005603541 kernel: Key type encrypted registered
Jan 31 00:55:35 np0005603541 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 00:55:35 np0005603541 kernel: Loading compiled-in module X.509 certificates
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 00:55:35 np0005603541 kernel: ima: Allocated hash algorithm: sha256
Jan 31 00:55:35 np0005603541 kernel: ima: No architecture policies found
Jan 31 00:55:35 np0005603541 kernel: evm: Initialising EVM extended attributes:
Jan 31 00:55:35 np0005603541 kernel: evm: security.selinux
Jan 31 00:55:35 np0005603541 kernel: evm: security.SMACK64 (disabled)
Jan 31 00:55:35 np0005603541 kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 00:55:35 np0005603541 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 00:55:35 np0005603541 kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 00:55:35 np0005603541 kernel: evm: security.apparmor (disabled)
Jan 31 00:55:35 np0005603541 kernel: evm: security.ima
Jan 31 00:55:35 np0005603541 kernel: evm: security.capability
Jan 31 00:55:35 np0005603541 kernel: evm: HMAC attrs: 0x1
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 00:55:35 np0005603541 kernel: Running certificate verification RSA selftest
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 00:55:35 np0005603541 kernel: Running certificate verification ECDSA selftest
Jan 31 00:55:35 np0005603541 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 00:55:35 np0005603541 kernel: clk: Disabling unused clocks
Jan 31 00:55:35 np0005603541 kernel: Freeing unused decrypted memory: 2028K
Jan 31 00:55:35 np0005603541 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 00:55:35 np0005603541 kernel: Write protecting the kernel read-only data: 30720k
Jan 31 00:55:35 np0005603541 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 00:55:35 np0005603541 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 00:55:35 np0005603541 kernel: Run /init as init process
Jan 31 00:55:35 np0005603541 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 00:55:35 np0005603541 systemd: Detected virtualization kvm.
Jan 31 00:55:35 np0005603541 systemd: Detected architecture x86-64.
Jan 31 00:55:35 np0005603541 systemd: Running in initrd.
Jan 31 00:55:35 np0005603541 systemd: No hostname configured, using default hostname.
Jan 31 00:55:35 np0005603541 systemd: Hostname set to <localhost>.
Jan 31 00:55:35 np0005603541 systemd: Initializing machine ID from VM UUID.
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: Manufacturer: QEMU
Jan 31 00:55:35 np0005603541 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 00:55:35 np0005603541 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 00:55:35 np0005603541 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 00:55:35 np0005603541 systemd: Queued start job for default target Initrd Default Target.
Jan 31 00:55:35 np0005603541 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 00:55:35 np0005603541 systemd: Reached target Local Encrypted Volumes.
Jan 31 00:55:35 np0005603541 systemd: Reached target Initrd /usr File System.
Jan 31 00:55:35 np0005603541 systemd: Reached target Local File Systems.
Jan 31 00:55:35 np0005603541 systemd: Reached target Path Units.
Jan 31 00:55:35 np0005603541 systemd: Reached target Slice Units.
Jan 31 00:55:35 np0005603541 systemd: Reached target Swaps.
Jan 31 00:55:35 np0005603541 systemd: Reached target Timer Units.
Jan 31 00:55:35 np0005603541 systemd: Listening on D-Bus System Message Bus Socket.
Jan 31 00:55:35 np0005603541 systemd: Listening on Journal Socket (/dev/log).
Jan 31 00:55:35 np0005603541 systemd: Listening on Journal Socket.
Jan 31 00:55:35 np0005603541 systemd: Listening on udev Control Socket.
Jan 31 00:55:35 np0005603541 systemd: Listening on udev Kernel Socket.
Jan 31 00:55:35 np0005603541 systemd: Reached target Socket Units.
Jan 31 00:55:35 np0005603541 systemd: Starting Create List of Static Device Nodes...
Jan 31 00:55:35 np0005603541 systemd: Starting Journal Service...
Jan 31 00:55:35 np0005603541 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 00:55:35 np0005603541 systemd: Starting Apply Kernel Variables...
Jan 31 00:55:35 np0005603541 systemd: Starting Create System Users...
Jan 31 00:55:35 np0005603541 systemd: Starting Setup Virtual Console...
Jan 31 00:55:35 np0005603541 systemd: Finished Create List of Static Device Nodes.
Jan 31 00:55:35 np0005603541 systemd: Finished Apply Kernel Variables.
Jan 31 00:55:35 np0005603541 systemd-journald[306]: Journal started
Jan 31 00:55:35 np0005603541 systemd-journald[306]: Runtime Journal (/run/log/journal/447bf06aa3b247e0813a295d0298e0f3) is 8.0M, max 153.6M, 145.6M free.
Jan 31 00:55:35 np0005603541 systemd: Started Journal Service.
Jan 31 00:55:35 np0005603541 systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 31 00:55:35 np0005603541 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 31 00:55:35 np0005603541 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 00:55:35 np0005603541 systemd[1]: Finished Create System Users.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 00:55:35 np0005603541 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 00:55:35 np0005603541 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 00:55:35 np0005603541 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 00:55:35 np0005603541 systemd[1]: Finished Setup Virtual Console.
Jan 31 00:55:35 np0005603541 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting dracut cmdline hook...
Jan 31 00:55:35 np0005603541 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 00:55:35 np0005603541 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 00:55:35 np0005603541 systemd[1]: Finished dracut cmdline hook.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting dracut pre-udev hook...
Jan 31 00:55:35 np0005603541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 00:55:35 np0005603541 kernel: device-mapper: uevent: version 1.0.3
Jan 31 00:55:35 np0005603541 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 00:55:35 np0005603541 kernel: RPC: Registered named UNIX socket transport module.
Jan 31 00:55:35 np0005603541 kernel: RPC: Registered udp transport module.
Jan 31 00:55:35 np0005603541 kernel: RPC: Registered tcp transport module.
Jan 31 00:55:35 np0005603541 kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 00:55:35 np0005603541 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 00:55:35 np0005603541 rpc.statd[440]: Version 2.5.4 starting
Jan 31 00:55:35 np0005603541 rpc.statd[440]: Initializing NSM state
Jan 31 00:55:35 np0005603541 rpc.idmapd[445]: Setting log level to 0
Jan 31 00:55:35 np0005603541 systemd[1]: Finished dracut pre-udev hook.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 00:55:35 np0005603541 systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 00:55:35 np0005603541 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting dracut pre-trigger hook...
Jan 31 00:55:35 np0005603541 systemd[1]: Finished dracut pre-trigger hook.
Jan 31 00:55:35 np0005603541 systemd[1]: Starting Coldplug All udev Devices...
Jan 31 00:55:35 np0005603541 systemd[1]: Created slice Slice /system/modprobe.
Jan 31 00:55:36 np0005603541 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 00:55:36 np0005603541 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 00:55:36 np0005603541 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 00:55:36 np0005603541 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 00:55:36 np0005603541 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Network.
Jan 31 00:55:36 np0005603541 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 00:55:36 np0005603541 systemd[1]: Starting dracut initqueue hook...
Jan 31 00:55:36 np0005603541 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:55:36 np0005603541 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 00:55:36 np0005603541 kernel: scsi host0: ata_piix
Jan 31 00:55:36 np0005603541 kernel: scsi host1: ata_piix
Jan 31 00:55:36 np0005603541 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 00:55:36 np0005603541 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 00:55:36 np0005603541 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 00:55:36 np0005603541 systemd[1]: Mounting Kernel Configuration File System...
Jan 31 00:55:36 np0005603541 kernel: vda: vda1
Jan 31 00:55:36 np0005603541 systemd[1]: Mounted Kernel Configuration File System.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target System Initialization.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Basic System.
Jan 31 00:55:36 np0005603541 kernel: ata1: found unknown device (class 0)
Jan 31 00:55:36 np0005603541 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 00:55:36 np0005603541 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 00:55:36 np0005603541 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 00:55:36 np0005603541 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 00:55:36 np0005603541 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 00:55:36 np0005603541 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Initrd Root Device.
Jan 31 00:55:36 np0005603541 systemd[1]: Finished dracut initqueue hook.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 00:55:36 np0005603541 systemd[1]: Reached target Remote File Systems.
Jan 31 00:55:36 np0005603541 systemd[1]: Starting dracut pre-mount hook...
Jan 31 00:55:36 np0005603541 systemd[1]: Finished dracut pre-mount hook.
Jan 31 00:55:36 np0005603541 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 00:55:36 np0005603541 systemd-fsck[551]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 00:55:36 np0005603541 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 00:55:36 np0005603541 systemd[1]: Mounting /sysroot...
Jan 31 00:55:37 np0005603541 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 00:55:37 np0005603541 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 00:56:01 np0005603541 kernel: XFS (vda1): Ending clean mount
Jan 31 00:57:06 np0005603541 systemd[1]: sysroot.mount: Mounting timed out. Terminating.
Jan 31 00:57:07 np0005603541 systemd[1]: sysroot.mount: Mount process exited, code=killed, status=15/TERM
Jan 31 00:57:07 np0005603541 systemd[1]: Mounted /sysroot.
Jan 31 00:57:07 np0005603541 systemd[1]: Reached target Initrd Root File System.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 00:57:07 np0005603541 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 00:57:07 np0005603541 systemd[1]: Reached target Initrd File Systems.
Jan 31 00:57:07 np0005603541 systemd[1]: Reached target Initrd Default Target.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting dracut mount hook...
Jan 31 00:57:07 np0005603541 systemd[1]: Finished dracut mount hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 00:57:07 np0005603541 rpc.idmapd[445]: exiting on signal 15
Jan 31 00:57:07 np0005603541 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Network.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Timer Units.
Jan 31 00:57:07 np0005603541 systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Initrd Default Target.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Basic System.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Initrd Root Device.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Initrd /usr File System.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Path Units.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Remote File Systems.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Slice Units.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Socket Units.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target System Initialization.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Local File Systems.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Swaps.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut mount hook.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut pre-mount hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut initqueue hook.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Setup Virtual Console.
Jan 31 00:57:07 np0005603541 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Closed udev Control Socket.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Closed udev Kernel Socket.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut pre-udev hook.
Jan 31 00:57:07 np0005603541 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped dracut cmdline hook.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting Cleanup udev Database...
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 00:57:07 np0005603541 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 00:57:07 np0005603541 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Stopped Create System Users.
Jan 31 00:57:07 np0005603541 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 00:57:07 np0005603541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 00:57:07 np0005603541 systemd[1]: Finished Cleanup udev Database.
Jan 31 00:57:07 np0005603541 systemd[1]: Reached target Switch Root.
Jan 31 00:57:07 np0005603541 systemd[1]: Starting Switch Root...
Jan 31 00:57:07 np0005603541 systemd[1]: Switching root.
Jan 31 00:57:07 np0005603541 systemd-journald[306]: Journal stopped
Jan 31 00:57:09 np0005603541 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 31 00:57:09 np0005603541 kernel: audit: type=1404 audit(1769839028.133:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 00:57:09 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 00:57:09 np0005603541 kernel: audit: type=1403 audit(1769839028.320:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 00:57:09 np0005603541 systemd: Successfully loaded SELinux policy in 193.073ms.
Jan 31 00:57:09 np0005603541 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 64.095ms.
Jan 31 00:57:09 np0005603541 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 00:57:09 np0005603541 systemd: Detected virtualization kvm.
Jan 31 00:57:09 np0005603541 systemd: Detected architecture x86-64.
Jan 31 00:57:09 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 00:57:09 np0005603541 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd: Stopped Switch Root.
Jan 31 00:57:09 np0005603541 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 00:57:09 np0005603541 systemd: Created slice Slice /system/getty.
Jan 31 00:57:09 np0005603541 systemd: Created slice Slice /system/serial-getty.
Jan 31 00:57:09 np0005603541 systemd: Created slice Slice /system/sshd-keygen.
Jan 31 00:57:09 np0005603541 systemd: Created slice User and Session Slice.
Jan 31 00:57:09 np0005603541 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 00:57:09 np0005603541 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 31 00:57:09 np0005603541 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 00:57:09 np0005603541 systemd: Reached target Local Encrypted Volumes.
Jan 31 00:57:09 np0005603541 systemd: Stopped target Switch Root.
Jan 31 00:57:09 np0005603541 systemd: Stopped target Initrd File Systems.
Jan 31 00:57:09 np0005603541 systemd: Stopped target Initrd Root File System.
Jan 31 00:57:09 np0005603541 systemd: Reached target Local Integrity Protected Volumes.
Jan 31 00:57:09 np0005603541 systemd: Reached target Path Units.
Jan 31 00:57:09 np0005603541 systemd: Reached target rpc_pipefs.target.
Jan 31 00:57:09 np0005603541 systemd: Reached target Slice Units.
Jan 31 00:57:09 np0005603541 systemd: Reached target Swaps.
Jan 31 00:57:09 np0005603541 systemd: Reached target Local Verity Protected Volumes.
Jan 31 00:57:09 np0005603541 systemd: Listening on RPCbind Server Activation Socket.
Jan 31 00:57:09 np0005603541 systemd: Reached target RPC Port Mapper.
Jan 31 00:57:09 np0005603541 systemd: Listening on Process Core Dump Socket.
Jan 31 00:57:09 np0005603541 systemd: Listening on initctl Compatibility Named Pipe.
Jan 31 00:57:09 np0005603541 systemd: Listening on udev Control Socket.
Jan 31 00:57:09 np0005603541 systemd: Listening on udev Kernel Socket.
Jan 31 00:57:09 np0005603541 systemd: Mounting Huge Pages File System...
Jan 31 00:57:09 np0005603541 systemd: Mounting POSIX Message Queue File System...
Jan 31 00:57:09 np0005603541 systemd: Mounting Kernel Debug File System...
Jan 31 00:57:09 np0005603541 systemd: Mounting Kernel Trace File System...
Jan 31 00:57:09 np0005603541 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 00:57:09 np0005603541 systemd: Starting Create List of Static Device Nodes...
Jan 31 00:57:09 np0005603541 systemd: Starting Load Kernel Module configfs...
Jan 31 00:57:09 np0005603541 systemd: Starting Load Kernel Module drm...
Jan 31 00:57:09 np0005603541 systemd: Starting Load Kernel Module efi_pstore...
Jan 31 00:57:09 np0005603541 systemd: Starting Load Kernel Module fuse...
Jan 31 00:57:09 np0005603541 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 00:57:09 np0005603541 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd: Stopped File System Check on Root Device.
Jan 31 00:57:09 np0005603541 systemd: Stopped Journal Service.
Jan 31 00:57:09 np0005603541 kernel: fuse: init (API version 7.37)
Jan 31 00:57:09 np0005603541 systemd: Starting Journal Service...
Jan 31 00:57:09 np0005603541 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 00:57:09 np0005603541 systemd: Starting Generate network units from Kernel command line...
Jan 31 00:57:09 np0005603541 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 00:57:09 np0005603541 systemd: Starting Remount Root and Kernel File Systems...
Jan 31 00:57:09 np0005603541 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 00:57:09 np0005603541 systemd: Starting Apply Kernel Variables...
Jan 31 00:57:09 np0005603541 systemd-journald[674]: Journal started
Jan 31 00:57:09 np0005603541 systemd-journald[674]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 00:57:09 np0005603541 systemd[1]: Queued start job for default target Multi-User System.
Jan 31 00:57:09 np0005603541 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd: Starting Coldplug All udev Devices...
Jan 31 00:57:09 np0005603541 systemd: Started Journal Service.
Jan 31 00:57:09 np0005603541 systemd[1]: Mounted Huge Pages File System.
Jan 31 00:57:09 np0005603541 systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 00:57:09 np0005603541 systemd[1]: Mounted Kernel Debug File System.
Jan 31 00:57:09 np0005603541 systemd[1]: Mounted Kernel Trace File System.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 00:57:09 np0005603541 kernel: ACPI: bus type drm_connector registered
Jan 31 00:57:09 np0005603541 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 00:57:09 np0005603541 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Load Kernel Module drm.
Jan 31 00:57:09 np0005603541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 00:57:09 np0005603541 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Load Kernel Module fuse.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 00:57:09 np0005603541 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 00:57:09 np0005603541 systemd[1]: Mounting FUSE Control File System...
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 00:57:09 np0005603541 systemd[1]: Mounted FUSE Control File System.
Jan 31 00:57:09 np0005603541 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Rebuild Hardware Database...
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 00:57:09 np0005603541 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Create System Users...
Jan 31 00:57:09 np0005603541 systemd-journald[674]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 00:57:09 np0005603541 systemd-journald[674]: Received client request to flush runtime journal.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Apply Kernel Variables.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Create System Users.
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 00:57:09 np0005603541 systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 00:57:09 np0005603541 systemd[1]: Reached target Local File Systems.
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 00:57:09 np0005603541 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 00:57:09 np0005603541 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 00:57:09 np0005603541 bootctl[691]: Couldn't find EFI system partition, skipping.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 00:57:09 np0005603541 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 00:57:09 np0005603541 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 00:57:09 np0005603541 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Security Auditing Service...
Jan 31 00:57:09 np0005603541 systemd[1]: Starting RPC Bind...
Jan 31 00:57:09 np0005603541 systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 00:57:09 np0005603541 auditd[697]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 00:57:09 np0005603541 auditd[697]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 00:57:09 np0005603541 systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 00:57:09 np0005603541 systemd[1]: Started RPC Bind.
Jan 31 00:57:09 np0005603541 augenrules[702]: /sbin/augenrules: No change
Jan 31 00:57:10 np0005603541 augenrules[717]: No rules
Jan 31 00:57:10 np0005603541 augenrules[717]: enabled 1
Jan 31 00:57:10 np0005603541 augenrules[717]: failure 1
Jan 31 00:57:10 np0005603541 augenrules[717]: pid 697
Jan 31 00:57:10 np0005603541 augenrules[717]: rate_limit 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_limit 8192
Jan 31 00:57:10 np0005603541 augenrules[717]: lost 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog 4
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time 60000
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time_actual 0
Jan 31 00:57:10 np0005603541 augenrules[717]: enabled 1
Jan 31 00:57:10 np0005603541 augenrules[717]: failure 1
Jan 31 00:57:10 np0005603541 augenrules[717]: pid 697
Jan 31 00:57:10 np0005603541 augenrules[717]: rate_limit 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_limit 8192
Jan 31 00:57:10 np0005603541 augenrules[717]: lost 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog 3
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time 60000
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time_actual 0
Jan 31 00:57:10 np0005603541 augenrules[717]: enabled 1
Jan 31 00:57:10 np0005603541 augenrules[717]: failure 1
Jan 31 00:57:10 np0005603541 augenrules[717]: pid 697
Jan 31 00:57:10 np0005603541 augenrules[717]: rate_limit 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_limit 8192
Jan 31 00:57:10 np0005603541 augenrules[717]: lost 0
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog 3
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time 60000
Jan 31 00:57:10 np0005603541 augenrules[717]: backlog_wait_time_actual 0
Jan 31 00:57:10 np0005603541 systemd[1]: Started Security Auditing Service.
Jan 31 00:57:10 np0005603541 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 00:57:10 np0005603541 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 00:57:10 np0005603541 systemd[1]: Finished Rebuild Hardware Database.
Jan 31 00:57:10 np0005603541 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 00:57:10 np0005603541 systemd-udevd[725]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 00:57:10 np0005603541 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 00:57:10 np0005603541 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 00:57:10 np0005603541 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 00:57:10 np0005603541 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 00:57:10 np0005603541 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 00:57:10 np0005603541 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 00:57:10 np0005603541 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 00:57:10 np0005603541 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 00:57:10 np0005603541 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 00:57:10 np0005603541 systemd-udevd[757]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:57:10 np0005603541 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 00:57:10 np0005603541 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 00:57:10 np0005603541 kernel: Console: switching to colour dummy device 80x25
Jan 31 00:57:10 np0005603541 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 00:57:10 np0005603541 kernel: [drm] features: -context_init
Jan 31 00:57:10 np0005603541 kernel: [drm] number of scanouts: 1
Jan 31 00:57:10 np0005603541 kernel: [drm] number of cap sets: 0
Jan 31 00:57:10 np0005603541 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 00:57:10 np0005603541 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 00:57:10 np0005603541 kernel: Console: switching to colour frame buffer device 128x48
Jan 31 00:57:10 np0005603541 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 00:57:10 np0005603541 kernel: kvm_amd: TSC scaling supported
Jan 31 00:57:10 np0005603541 kernel: kvm_amd: Nested Virtualization enabled
Jan 31 00:57:10 np0005603541 kernel: kvm_amd: Nested Paging enabled
Jan 31 00:57:10 np0005603541 kernel: kvm_amd: LBR virtualization supported
Jan 31 00:57:11 np0005603541 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 00:57:11 np0005603541 systemd[1]: Starting Update is Completed...
Jan 31 00:57:11 np0005603541 systemd[1]: Finished Update is Completed.
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target System Initialization.
Jan 31 00:57:11 np0005603541 systemd[1]: Started dnf makecache --timer.
Jan 31 00:57:11 np0005603541 systemd[1]: Started Daily rotation of log files.
Jan 31 00:57:11 np0005603541 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target Timer Units.
Jan 31 00:57:11 np0005603541 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 00:57:11 np0005603541 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target Socket Units.
Jan 31 00:57:11 np0005603541 systemd[1]: Starting D-Bus System Message Bus...
Jan 31 00:57:11 np0005603541 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 00:57:11 np0005603541 systemd[1]: Started D-Bus System Message Bus.
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target Basic System.
Jan 31 00:57:11 np0005603541 dbus-broker-lau[807]: Ready
Jan 31 00:57:11 np0005603541 systemd[1]: Starting NTP client/server...
Jan 31 00:57:11 np0005603541 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 00:57:11 np0005603541 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 00:57:11 np0005603541 systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 00:57:11 np0005603541 systemd[1]: Started irqbalance daemon.
Jan 31 00:57:11 np0005603541 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 00:57:11 np0005603541 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 00:57:11 np0005603541 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 00:57:11 np0005603541 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target sshd-keygen.target.
Jan 31 00:57:11 np0005603541 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 00:57:11 np0005603541 systemd[1]: Reached target User and Group Name Lookups.
Jan 31 00:57:11 np0005603541 systemd[1]: Starting User Login Management...
Jan 31 00:57:11 np0005603541 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 00:57:11 np0005603541 systemd-logind[817]: New seat seat0.
Jan 31 00:57:11 np0005603541 systemd-logind[817]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 00:57:11 np0005603541 systemd-logind[817]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 00:57:11 np0005603541 systemd[1]: Started User Login Management.
Jan 31 00:57:11 np0005603541 chronyd[826]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 00:57:11 np0005603541 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 00:57:11 np0005603541 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 00:57:11 np0005603541 chronyd[826]: Loaded 0 symmetric keys
Jan 31 00:57:11 np0005603541 chronyd[826]: Using right/UTC timezone to obtain leap second data
Jan 31 00:57:11 np0005603541 chronyd[826]: Loaded seccomp filter (level 2)
Jan 31 00:57:11 np0005603541 systemd[1]: Started NTP client/server.
Jan 31 00:57:11 np0005603541 iptables.init[812]: iptables: Applying firewall rules: [  OK  ]
Jan 31 00:57:11 np0005603541 systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 00:57:13 np0005603541 cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 05:57:13 +0000. Up 99.80 seconds.
Jan 31 00:57:13 np0005603541 systemd[1]: run-cloud\x2dinit-tmp-tmpl993muir.mount: Deactivated successfully.
Jan 31 00:57:13 np0005603541 systemd[1]: Starting Hostname Service...
Jan 31 00:57:13 np0005603541 systemd[1]: Started Hostname Service.
Jan 31 00:57:13 np0005603541 systemd-hostnamed[850]: Hostname set to <np0005603541.novalocal> (static)
Jan 31 00:57:14 np0005603541 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 00:57:14 np0005603541 systemd[1]: Reached target Preparation for Network.
Jan 31 00:57:14 np0005603541 systemd[1]: Starting Network Manager...
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3308] NetworkManager (version 1.54.3-2.el9) is starting... (boot:991be50c-1b19-4795-a191-f9fb0ceb117c)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3312] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3580] manager[0x5601b5303000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3698] hostname: hostname: using hostnamed
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3698] hostname: static hostname changed from (none) to "np0005603541.novalocal"
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3702] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3875] manager[0x5601b5303000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.3875] manager[0x5601b5303000]: rfkill: WWAN hardware radio set enabled
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4088] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4088] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4089] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4089] manager: Networking is enabled by state file
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4091] settings: Loaded settings plugin: keyfile (internal)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4209] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 00:57:14 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4244] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 00:57:14 np0005603541 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4294] dhcp: init: Using DHCP client 'internal'
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4299] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4309] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4319] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4351] device (lo): Activation: starting connection 'lo' (6a956e3f-91e5-480d-b46c-6c22e1e7ca7a)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4358] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4360] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4374] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4377] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4378] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4379] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4380] device (eth0): carrier: link connected
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4382] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 systemd[1]: Started Network Manager.
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4385] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4391] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4394] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4395] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4396] manager: NetworkManager state is now CONNECTING
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4397] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4402] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4404] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 00:57:14 np0005603541 systemd[1]: Reached target Network.
Jan 31 00:57:14 np0005603541 systemd[1]: Starting Network Manager Wait Online...
Jan 31 00:57:14 np0005603541 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 00:57:14 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4676] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4679] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 00:57:14 np0005603541 NetworkManager[854]: <info>  [1769839034.4683] device (lo): Activation: successful, device activated.
Jan 31 00:57:14 np0005603541 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 00:57:14 np0005603541 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 00:57:14 np0005603541 systemd[1]: Reached target NFS client services.
Jan 31 00:57:14 np0005603541 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 00:57:14 np0005603541 systemd[1]: Reached target Remote File Systems.
Jan 31 00:57:14 np0005603541 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0243] dhcp4 (eth0): state changed new lease, address=38.102.83.251
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0255] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0275] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0308] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0309] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0312] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0314] device (eth0): Activation: successful, device activated.
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0318] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 00:57:16 np0005603541 NetworkManager[854]: <info>  [1769839036.0321] manager: startup complete
Jan 31 00:57:16 np0005603541 systemd[1]: Finished Network Manager Wait Online.
Jan 31 00:57:16 np0005603541 systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 00:57:16 np0005603541 cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 05:57:16 +0000. Up 102.69 seconds.
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.251         | 255.255.255.0 | global | fa:16:3e:93:1f:0e |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe93:1f0e/64 |       .       |  link  | fa:16:3e:93:1f:0e |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 00:57:16 np0005603541 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 00:57:20 np0005603541 chronyd[826]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 31 00:57:20 np0005603541 chronyd[826]: System clock TAI offset set to 37 seconds
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 35 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 33 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 31 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 28 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 34 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 32 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 30 affinity is now unmanaged
Jan 31 00:57:21 np0005603541 irqbalance[813]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 00:57:21 np0005603541 irqbalance[813]: IRQ 29 affinity is now unmanaged
Jan 31 00:57:22 np0005603541 cloud-init[921]: Generating public/private rsa key pair.
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key fingerprint is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: SHA256:BEk9LBQz3kvvVFrXc04LeS+CtTF6IOgj2AF+5TLq2+g root@np0005603541.novalocal
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key's randomart image is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: +---[RSA 3072]----+
Jan 31 00:57:22 np0005603541 cloud-init[921]: |  .  oO=         |
Jan 31 00:57:22 np0005603541 cloud-init[921]: | . . +o*+     .. |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |  . = +.=.. *o.o+|
Jan 31 00:57:22 np0005603541 cloud-init[921]: |   = = o + O =oo=|
Jan 31 00:57:22 np0005603541 cloud-init[921]: |  o o o S * + ..o|
Jan 31 00:57:22 np0005603541 cloud-init[921]: | .   . . o . . . |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |  .       .      |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |   +             |
Jan 31 00:57:22 np0005603541 cloud-init[921]: | .E .            |
Jan 31 00:57:22 np0005603541 cloud-init[921]: +----[SHA256]-----+
Jan 31 00:57:22 np0005603541 cloud-init[921]: Generating public/private ecdsa key pair.
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key fingerprint is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: SHA256:nkYgHNQFISd9F9VJvbUQcX3K2oI37pw5XIN0WYOC55w root@np0005603541.novalocal
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key's randomart image is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: +---[ECDSA 256]---+
Jan 31 00:57:22 np0005603541 cloud-init[921]: |   .=o++. oo.==+.|
Jan 31 00:57:22 np0005603541 cloud-init[921]: |   . =o ...o o+.*|
Jan 31 00:57:22 np0005603541 cloud-init[921]: |    o .. .+ o..+*|
Jan 31 00:57:22 np0005603541 cloud-init[921]: |     . .   E. =o |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |        S  o =   |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |       o .. * +  |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |        +  + + . |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |       .   .+o   |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |           .=.   |
Jan 31 00:57:22 np0005603541 cloud-init[921]: +----[SHA256]-----+
Jan 31 00:57:22 np0005603541 cloud-init[921]: Generating public/private ed25519 key pair.
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 00:57:22 np0005603541 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key fingerprint is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: SHA256:dGMVpili/YuZa2RgGqYqq/I7eCsRAeerDcGtf0KE3tQ root@np0005603541.novalocal
Jan 31 00:57:22 np0005603541 cloud-init[921]: The key's randomart image is:
Jan 31 00:57:22 np0005603541 cloud-init[921]: +--[ED25519 256]--+
Jan 31 00:57:22 np0005603541 cloud-init[921]: |o .         +.   |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |ooo .  .   =     |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |.+.+ Eo + *      |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |o.=.o.oo = .     |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |.+.= + .S .      |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |.++ .   o+ .     |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |.+.o . o+ .      |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |* o o   ..       |
Jan 31 00:57:22 np0005603541 cloud-init[921]: |*=++   ..        |
Jan 31 00:57:22 np0005603541 cloud-init[921]: +----[SHA256]-----+
Jan 31 00:57:22 np0005603541 systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 00:57:22 np0005603541 systemd[1]: Reached target Cloud-config availability.
Jan 31 00:57:22 np0005603541 systemd[1]: Reached target Network is Online.
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Crash recovery kernel arming...
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 00:57:22 np0005603541 systemd[1]: Starting System Logging Service...
Jan 31 00:57:22 np0005603541 sm-notify[1003]: Version 2.5.4 starting
Jan 31 00:57:22 np0005603541 systemd[1]: Starting OpenSSH server daemon...
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Permit User Sessions...
Jan 31 00:57:22 np0005603541 systemd[1]: Started Notify NFS peers of a restart.
Jan 31 00:57:22 np0005603541 systemd[1]: Started OpenSSH server daemon.
Jan 31 00:57:22 np0005603541 systemd[1]: Finished Permit User Sessions.
Jan 31 00:57:22 np0005603541 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Jan 31 00:57:22 np0005603541 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 00:57:22 np0005603541 systemd[1]: Started System Logging Service.
Jan 31 00:57:22 np0005603541 systemd[1]: Started Command Scheduler.
Jan 31 00:57:22 np0005603541 systemd[1]: Started Getty on tty1.
Jan 31 00:57:22 np0005603541 systemd[1]: Started Serial Getty on ttyS0.
Jan 31 00:57:22 np0005603541 systemd[1]: Reached target Login Prompts.
Jan 31 00:57:22 np0005603541 systemd[1]: Reached target Multi-User System.
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 00:57:22 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:57:22 np0005603541 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 00:57:22 np0005603541 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 00:57:22 np0005603541 chronyd[826]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 00:57:22 np0005603541 cloud-init[1066]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 05:57:22 +0000. Up 109.06 seconds.
Jan 31 00:57:22 np0005603541 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Jan 31 00:57:22 np0005603541 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 00:57:22 np0005603541 systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 00:57:22 np0005603541 systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 00:57:23 np0005603541 cloud-init[1227]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 05:57:23 +0000. Up 109.55 seconds.
Jan 31 00:57:23 np0005603541 cloud-init[1270]: #############################################################
Jan 31 00:57:23 np0005603541 cloud-init[1277]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 00:57:23 np0005603541 cloud-init[1280]: 256 SHA256:nkYgHNQFISd9F9VJvbUQcX3K2oI37pw5XIN0WYOC55w root@np0005603541.novalocal (ECDSA)
Jan 31 00:57:23 np0005603541 cloud-init[1283]: 256 SHA256:dGMVpili/YuZa2RgGqYqq/I7eCsRAeerDcGtf0KE3tQ root@np0005603541.novalocal (ED25519)
Jan 31 00:57:23 np0005603541 cloud-init[1287]: 3072 SHA256:BEk9LBQz3kvvVFrXc04LeS+CtTF6IOgj2AF+5TLq2+g root@np0005603541.novalocal (RSA)
Jan 31 00:57:23 np0005603541 cloud-init[1290]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 00:57:23 np0005603541 cloud-init[1291]: #############################################################
Jan 31 00:57:23 np0005603541 cloud-init[1227]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 05:57:23 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 109.83 seconds
Jan 31 00:57:23 np0005603541 dracut[1302]: dracut-057-102.git20250818.el9
Jan 31 00:57:23 np0005603541 systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 00:57:23 np0005603541 systemd[1]: Reached target Cloud-init target.
Jan 31 00:57:23 np0005603541 dracut[1304]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:24 np0005603541 dracut[1304]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: memstrack is not available
Jan 31 00:57:25 np0005603541 dracut[1304]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 00:57:25 np0005603541 dracut[1304]: memstrack is not available
Jan 31 00:57:25 np0005603541 dracut[1304]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 00:57:25 np0005603541 dracut[1304]: *** Including module: systemd ***
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: fips ***
Jan 31 00:57:26 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: systemd-initrd ***
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: i18n ***
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: drm ***
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: prefixdevname ***
Jan 31 00:57:26 np0005603541 dracut[1304]: *** Including module: kernel-modules ***
Jan 31 00:57:26 np0005603541 kernel: block vda: the capability attribute has been deprecated.
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: kernel-modules-extra ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: qemu ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: fstab-sys ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: rootfs-block ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: terminfo ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: udev-rules ***
Jan 31 00:57:27 np0005603541 dracut[1304]: Skipping udev rule: 91-permissions.rules
Jan 31 00:57:27 np0005603541 dracut[1304]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: virtiofs ***
Jan 31 00:57:27 np0005603541 dracut[1304]: *** Including module: dracut-systemd ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: usrmount ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: base ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: fs-lib ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: kdumpbase ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 00:57:28 np0005603541 dracut[1304]:  microcode_ctl module: mangling fw_dir
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 00:57:28 np0005603541 dracut[1304]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: openssl ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: shutdown ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including module: squash ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Including modules done ***
Jan 31 00:57:28 np0005603541 dracut[1304]: *** Installing kernel module dependencies ***
Jan 31 00:57:29 np0005603541 dracut[1304]: *** Installing kernel module dependencies done ***
Jan 31 00:57:29 np0005603541 dracut[1304]: *** Resolving executable dependencies ***
Jan 31 00:57:30 np0005603541 dracut[1304]: *** Resolving executable dependencies done ***
Jan 31 00:57:30 np0005603541 dracut[1304]: *** Generating early-microcode cpio image ***
Jan 31 00:57:30 np0005603541 dracut[1304]: *** Store current command line parameters ***
Jan 31 00:57:30 np0005603541 dracut[1304]: Stored kernel commandline:
Jan 31 00:57:30 np0005603541 dracut[1304]: No dracut internal kernel commandline stored in the initramfs
Jan 31 00:57:31 np0005603541 dracut[1304]: *** Install squash loader ***
Jan 31 00:57:31 np0005603541 dracut[1304]: *** Squashing the files inside the initramfs ***
Jan 31 00:57:32 np0005603541 dracut[1304]: *** Squashing the files inside the initramfs done ***
Jan 31 00:57:32 np0005603541 dracut[1304]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 00:57:32 np0005603541 dracut[1304]: *** Hardlinking files ***
Jan 31 00:57:32 np0005603541 dracut[1304]: *** Hardlinking files done ***
Jan 31 00:57:33 np0005603541 dracut[1304]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 00:57:33 np0005603541 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Jan 31 00:57:33 np0005603541 kdumpctl[1017]: kdump: Starting kdump: [OK]
Jan 31 00:57:33 np0005603541 systemd[1]: Finished Crash recovery kernel arming.
Jan 31 00:57:33 np0005603541 systemd[1]: Startup finished in 1.279s (kernel) + 1min 33.233s (initrd) + 25.760s (userspace) = 2min 273ms.
Jan 31 00:57:44 np0005603541 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 00:58:06 np0005603541 systemd[1]: Created slice User Slice of UID 1000.
Jan 31 00:58:06 np0005603541 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 00:58:06 np0005603541 systemd-logind[817]: New session 1 of user zuul.
Jan 31 00:58:06 np0005603541 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 00:58:06 np0005603541 systemd[1]: Starting User Manager for UID 1000...
Jan 31 00:58:06 np0005603541 systemd[4309]: Queued start job for default target Main User Target.
Jan 31 00:58:06 np0005603541 systemd[4309]: Created slice User Application Slice.
Jan 31 00:58:06 np0005603541 systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 00:58:06 np0005603541 systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 00:58:06 np0005603541 systemd[4309]: Reached target Paths.
Jan 31 00:58:06 np0005603541 systemd[4309]: Reached target Timers.
Jan 31 00:58:06 np0005603541 systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 31 00:58:06 np0005603541 systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 31 00:58:06 np0005603541 systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 31 00:58:06 np0005603541 systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 31 00:58:06 np0005603541 systemd[4309]: Reached target Sockets.
Jan 31 00:58:06 np0005603541 systemd[4309]: Reached target Basic System.
Jan 31 00:58:06 np0005603541 systemd[4309]: Reached target Main User Target.
Jan 31 00:58:06 np0005603541 systemd[4309]: Startup finished in 161ms.
Jan 31 00:58:06 np0005603541 systemd[1]: Started User Manager for UID 1000.
Jan 31 00:58:06 np0005603541 systemd[1]: Started Session 1 of User zuul.
Jan 31 00:58:07 np0005603541 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 00:58:17 np0005603541 python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 00:58:26 np0005603541 python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 00:58:27 np0005603541 python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 00:58:29 np0005603541 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcT1sSKuKtP7Uq0MjNeQVuuPZJ5GQUMajyU1EMc7oKP7OPKur09xKegjfcQuJ1YWEngwomzcSh03o58EEcgHL7twSbbSV/tl19q0h1wtkobuk8zVRYN7tQa/7+Dp5jl1JUbLX1piCb+tuLUgQDdiulTbxlRD4ovZ8WuAKA8vVM7sVyANXJcBRjLRxQdcjys7R20df/sj4ryBJdnPmzVbP4EqMdexQEtCt/8FlC0Ih5W8Z5u3Z9XeqrzpR7MPmKSx2txi89bf82EtuA0X6ZdTxuY6yJSodI2XrTK6TPFQozJ+Qb2JQjFHOiFnKkkIkK/CeG0AQfXMUP/5RHLcPOwZzfDXmDzokfChY+tN1a5ypSxAK/QireQfgbN5UOS4Dj6dH8pdH392T4G8cpNm5P/bExl4G3EOnEbScCZ0h9faJPLV75PCEpymPzxDh7ufyymt/r+VWPlCDQkO3SUOzmgy4p/jCsJcOoEIoUrl7gneWKh/R9DdZ0jOS9uKURThmglcs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:29 np0005603541 python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:30 np0005603541 python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:30 np0005603541 python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839110.0989273-251-206234066086558/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9705b2ecf4824110a053f3cefa64f45f_id_rsa follow=False checksum=c62d5adb5a9253804fdd8540f659bb7cecfeeed4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:31 np0005603541 python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:31 np0005603541 python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839111.0872061-306-84594411521804/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9705b2ecf4824110a053f3cefa64f45f_id_rsa.pub follow=False checksum=edbba6552c915a7dc3463e232002c18fc71ee9d0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:33 np0005603541 python3[4979]: ansible-ping Invoked with data=pong
Jan 31 00:58:34 np0005603541 python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 00:58:36 np0005603541 python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 00:58:38 np0005603541 python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:38 np0005603541 python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:38 np0005603541 python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:39 np0005603541 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:39 np0005603541 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:39 np0005603541 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:41 np0005603541 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:41 np0005603541 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:42 np0005603541 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839121.4471674-31-223371934114193/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:43 np0005603541 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:43 np0005603541 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:43 np0005603541 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:43 np0005603541 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:44 np0005603541 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:44 np0005603541 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:44 np0005603541 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:44 np0005603541 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:45 np0005603541 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:45 np0005603541 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:45 np0005603541 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:45 np0005603541 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:46 np0005603541 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:46 np0005603541 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:46 np0005603541 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:46 np0005603541 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:47 np0005603541 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:47 np0005603541 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:47 np0005603541 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:48 np0005603541 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:48 np0005603541 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:48 np0005603541 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:48 np0005603541 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:49 np0005603541 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:49 np0005603541 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:49 np0005603541 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 00:58:52 np0005603541 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 00:58:52 np0005603541 systemd[1]: Starting Time & Date Service...
Jan 31 00:58:52 np0005603541 systemd[1]: Started Time & Date Service.
Jan 31 00:58:52 np0005603541 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 31 00:58:53 np0005603541 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:53 np0005603541 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:53 np0005603541 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769839133.3877802-251-280737492059376/source _original_basename=tmpha_re2j_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:54 np0005603541 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:54 np0005603541 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839134.157418-301-88961860072678/source _original_basename=tmp4bz_ai_w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:55 np0005603541 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:55 np0005603541 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839135.2299554-381-48560660169510/source _original_basename=tmpis2u1lw8 follow=False checksum=b61b8b67cbeabdb25607a6c3ed0750848521994a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:56 np0005603541 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 00:58:56 np0005603541 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 00:58:57 np0005603541 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 00:58:57 np0005603541 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839136.9034815-451-167867598609664/source _original_basename=tmp9tahv99r follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:58:58 np0005603541 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-fbab-701a-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 00:58:58 np0005603541 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-fbab-701a-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 00:59:00 np0005603541 python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 00:59:22 np0005603541 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 00:59:36 np0005603541 python3[6950]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 01:00:23 np0005603541 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 01:00:23 np0005603541 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2327] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:00:23 np0005603541 systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2428] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2450] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2453] device (eth1): carrier: link connected
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2455] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2461] policy: auto-activating connection 'Wired connection 1' (b11f7c16-9ea4-360a-b2de-a9062d089551)
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2465] device (eth1): Activation: starting connection 'Wired connection 1' (b11f7c16-9ea4-360a-b2de-a9062d089551)
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2467] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2470] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2475] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:00:23 np0005603541 NetworkManager[854]: <info>  [1769839223.2479] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:00:23 np0005603541 systemd[4309]: Starting Mark boot as successful...
Jan 31 01:00:23 np0005603541 systemd[4309]: Finished Mark boot as successful.
Jan 31 01:00:24 np0005603541 python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-d406-dc6a-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:00:34 np0005603541 python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:00:34 np0005603541 python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839234.1379468-104-50075321726853/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=f6a32957dd089d8eadb4bbac2cdd4725d5cbf7ac backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:00:35 np0005603541 python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:00:35 np0005603541 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 01:00:35 np0005603541 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 01:00:35 np0005603541 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7184] caught SIGTERM, shutting down normally.
Jan 31 01:00:35 np0005603541 systemd[1]: Stopping Network Manager...
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7190] dhcp4 (eth0): canceled DHCP transaction
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7190] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7190] dhcp4 (eth0): state changed no lease
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7191] manager: NetworkManager state is now CONNECTING
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7318] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7318] dhcp4 (eth1): state changed no lease
Jan 31 01:00:35 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:00:35 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:00:35 np0005603541 NetworkManager[854]: <info>  [1769839235.7897] exiting (success)
Jan 31 01:00:35 np0005603541 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 01:00:35 np0005603541 systemd[1]: Stopped Network Manager.
Jan 31 01:00:35 np0005603541 systemd[1]: NetworkManager.service: Consumed 1.748s CPU time, 9.9M memory peak.
Jan 31 01:00:35 np0005603541 systemd[1]: Starting Network Manager...
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.8465] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:991be50c-1b19-4795-a191-f9fb0ceb117c)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.8468] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.8524] manager[0x55e30417c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:00:35 np0005603541 systemd[1]: Starting Hostname Service...
Jan 31 01:00:35 np0005603541 systemd[1]: Started Hostname Service.
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9292] hostname: hostname: using hostnamed
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9294] hostname: static hostname changed from (none) to "np0005603541.novalocal"
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9302] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9308] manager[0x55e30417c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9308] manager[0x55e30417c000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9348] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9348] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9349] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9350] manager: Networking is enabled by state file
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9353] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9359] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9401] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9416] dhcp: init: Using DHCP client 'internal'
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9421] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9431] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9440] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9457] device (lo): Activation: starting connection 'lo' (6a956e3f-91e5-480d-b46c-6c22e1e7ca7a)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9467] device (eth0): carrier: link connected
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9476] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9486] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9488] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9499] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9510] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9519] device (eth1): carrier: link connected
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9525] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9536] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b11f7c16-9ea4-360a-b2de-a9062d089551) (indicated)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9538] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9546] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9558] device (eth1): Activation: starting connection 'Wired connection 1' (b11f7c16-9ea4-360a-b2de-a9062d089551)
Jan 31 01:00:35 np0005603541 systemd[1]: Started Network Manager.
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9568] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9581] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9585] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9588] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9591] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9595] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9598] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9601] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9604] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9615] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9622] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9637] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9642] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9667] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9672] dhcp4 (eth0): state changed new lease, address=38.102.83.251
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9681] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9690] device (lo): Activation: successful, device activated.
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9706] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:00:35 np0005603541 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:00:35 np0005603541 NetworkManager[7199]: <info>  [1769839235.9977] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:00:36 np0005603541 NetworkManager[7199]: <info>  [1769839235.9999] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:00:36 np0005603541 NetworkManager[7199]: <info>  [1769839236.0003] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:00:36 np0005603541 NetworkManager[7199]: <info>  [1769839236.0007] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:00:36 np0005603541 NetworkManager[7199]: <info>  [1769839236.0010] device (eth0): Activation: successful, device activated.
Jan 31 01:00:36 np0005603541 NetworkManager[7199]: <info>  [1769839236.0016] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:00:36 np0005603541 python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-d406-dc6a-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:00:46 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:01:05 np0005603541 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.5984] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:01:21 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:01:21 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6269] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6272] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6280] device (eth1): Activation: successful, device activated.
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6287] manager: startup complete
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6290] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <warn>  [1769839281.6296] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6302] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6499] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6502] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6502] dhcp4 (eth1): state changed no lease
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6527] policy: auto-activating connection 'ci-private-network' (94ec5c75-c852-55dd-83db-8db69359c060)
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6535] device (eth1): Activation: starting connection 'ci-private-network' (94ec5c75-c852-55dd-83db-8db69359c060)
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6536] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6540] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6549] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.6561] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.7356] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.7359] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:01:21 np0005603541 NetworkManager[7199]: <info>  [1769839281.7364] device (eth1): Activation: successful, device activated.
Jan 31 01:01:31 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:01:36 np0005603541 systemd-logind[817]: Session 1 logged out. Waiting for processes to exit.
Jan 31 01:02:48 np0005603541 systemd-logind[817]: New session 3 of user zuul.
Jan 31 01:02:48 np0005603541 systemd[1]: Started Session 3 of User zuul.
Jan 31 01:02:48 np0005603541 python3[7394]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:02:48 np0005603541 python3[7467]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839368.2431755-373-92283992798580/source _original_basename=tmpukn1614f follow=False checksum=ae76df2b21206ed64f24b43f0a068022f10b8b37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:02:53 np0005603541 systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 01:02:53 np0005603541 systemd-logind[817]: Session 3 logged out. Waiting for processes to exit.
Jan 31 01:02:53 np0005603541 systemd-logind[817]: Removed session 3.
Jan 31 01:03:51 np0005603541 systemd[4309]: Created slice User Background Tasks Slice.
Jan 31 01:03:51 np0005603541 systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 01:03:51 np0005603541 systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 01:10:51 np0005603541 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 01:10:52 np0005603541 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 01:10:52 np0005603541 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 01:10:52 np0005603541 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 01:11:23 np0005603541 systemd-logind[817]: New session 4 of user zuul.
Jan 31 01:11:23 np0005603541 systemd[1]: Started Session 4 of User zuul.
Jan 31 01:11:23 np0005603541 python3[7532]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bc1d-b10c-000000000cb4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:24 np0005603541 python3[7561]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:24 np0005603541 python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:24 np0005603541 python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:25 np0005603541 python3[7639]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:25 np0005603541 python3[7665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:26 np0005603541 python3[7743]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:11:26 np0005603541 python3[7816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839885.836996-374-91996225967843/source _original_basename=tmpv4xwwjcr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:27 np0005603541 python3[7866]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 01:11:27 np0005603541 systemd[1]: Reloading.
Jan 31 01:11:27 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:11:29 np0005603541 python3[7922]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 01:11:29 np0005603541 python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:30 np0005603541 python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:30 np0005603541 python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:30 np0005603541 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:31 np0005603541 python3[8059]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bc1d-b10c-000000000cbb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:11:31 np0005603541 python3[8089]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:11:34 np0005603541 systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 01:11:34 np0005603541 systemd[1]: session-4.scope: Consumed 3.822s CPU time.
Jan 31 01:11:34 np0005603541 systemd-logind[817]: Session 4 logged out. Waiting for processes to exit.
Jan 31 01:11:34 np0005603541 systemd-logind[817]: Removed session 4.
Jan 31 01:11:36 np0005603541 systemd-logind[817]: New session 5 of user zuul.
Jan 31 01:11:36 np0005603541 systemd[1]: Started Session 5 of User zuul.
Jan 31 01:11:37 np0005603541 python3[8124]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 01:11:56 np0005603541 setsebool[8166]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 01:11:56 np0005603541 setsebool[8166]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 01:12:19 np0005603541 kernel: SELinux:  Converting 386 SID table entries...
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:12:19 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  Converting 389 SID table entries...
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:12:34 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:13:02 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:13:02 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:13:02 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:13:02 np0005603541 systemd[1]: Reloading.
Jan 31 01:13:02 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:13:02 np0005603541 systemd[1]: Starting dnf makecache...
Jan 31 01:13:02 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:13:02 np0005603541 dnf[9015]: Failed determining last makecache time.
Jan 31 01:13:04 np0005603541 dnf[9015]: CentOS Stream 9 - BaseOS                         50 kB/s | 6.1 kB     00:00
Jan 31 01:13:04 np0005603541 dnf[9015]: CentOS Stream 9 - AppStream                      28 kB/s | 6.5 kB     00:00
Jan 31 01:13:05 np0005603541 dnf[9015]: CentOS Stream 9 - CRB                            52 kB/s | 6.0 kB     00:00
Jan 31 01:13:05 np0005603541 dnf[9015]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 31 01:13:05 np0005603541 dnf[9015]: Metadata cache created.
Jan 31 01:13:05 np0005603541 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 01:13:05 np0005603541 systemd[1]: Finished dnf makecache.
Jan 31 01:13:12 np0005603541 python3[14157]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-f5b1-e33a-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:13:14 np0005603541 kernel: evm: overlay not supported
Jan 31 01:13:14 np0005603541 systemd[4309]: Starting D-Bus User Message Bus...
Jan 31 01:13:14 np0005603541 dbus-broker-launch[14671]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 01:13:14 np0005603541 dbus-broker-launch[14671]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 01:13:14 np0005603541 systemd[4309]: Started D-Bus User Message Bus.
Jan 31 01:13:14 np0005603541 dbus-broker-lau[14671]: Ready
Jan 31 01:13:14 np0005603541 systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:13:14 np0005603541 systemd[4309]: Created slice Slice /user.
Jan 31 01:13:14 np0005603541 systemd[4309]: podman-14580.scope: unit configures an IP firewall, but not running as root.
Jan 31 01:13:14 np0005603541 systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 01:13:14 np0005603541 systemd[4309]: Started podman-14580.scope.
Jan 31 01:13:14 np0005603541 systemd[4309]: Started podman-pause-dd285f8d.scope.
Jan 31 01:13:15 np0005603541 python3[15024]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.176:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.176:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:13:15 np0005603541 python3[15024]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 01:13:15 np0005603541 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 01:13:15 np0005603541 systemd[1]: session-5.scope: Consumed 41.901s CPU time.
Jan 31 01:13:15 np0005603541 systemd-logind[817]: Session 5 logged out. Waiting for processes to exit.
Jan 31 01:13:15 np0005603541 systemd-logind[817]: Removed session 5.
Jan 31 01:13:44 np0005603541 systemd-logind[817]: New session 6 of user zuul.
Jan 31 01:13:44 np0005603541 systemd[1]: Started Session 6 of User zuul.
Jan 31 01:13:44 np0005603541 python3[27198]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:13:45 np0005603541 python3[27404]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:13:46 np0005603541 python3[27772]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603541.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 01:13:48 np0005603541 python3[28161]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:13:48 np0005603541 python3[28376]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:13:49 np0005603541 python3[28627]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840028.3756428-167-197280924253753/source _original_basename=tmpa7p8fnzw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:13:49 np0005603541 python3[28974]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 01:13:50 np0005603541 systemd[1]: Starting Hostname Service...
Jan 31 01:13:50 np0005603541 systemd[1]: Started Hostname Service.
Jan 31 01:13:50 np0005603541 systemd-hostnamed[29106]: Changed pretty hostname to 'compute-0'
Jan 31 01:13:50 np0005603541 systemd-hostnamed[29106]: Hostname set to <compute-0> (static)
Jan 31 01:13:50 np0005603541 NetworkManager[7199]: <info>  [1769840030.0788] hostname: static hostname changed from "np0005603541.novalocal" to "compute-0"
Jan 31 01:13:50 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:13:50 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:13:50 np0005603541 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 01:13:50 np0005603541 systemd[1]: session-6.scope: Consumed 2.121s CPU time.
Jan 31 01:13:50 np0005603541 systemd-logind[817]: Session 6 logged out. Waiting for processes to exit.
Jan 31 01:13:50 np0005603541 systemd-logind[817]: Removed session 6.
Jan 31 01:13:52 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:13:52 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:13:52 np0005603541 systemd[1]: man-db-cache-update.service: Consumed 42.872s CPU time.
Jan 31 01:13:52 np0005603541 systemd[1]: run-r9fb921449e06497596edd89f248a84d1.service: Deactivated successfully.
Jan 31 01:14:00 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:14:20 np0005603541 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:20:34 np0005603541 systemd-logind[817]: New session 7 of user zuul.
Jan 31 01:20:34 np0005603541 systemd[1]: Started Session 7 of User zuul.
Jan 31 01:20:34 np0005603541 python3[30100]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:20:36 np0005603541 python3[30216]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:37 np0005603541 python3[30289]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:37 np0005603541 python3[30315]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:37 np0005603541 python3[30388]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:37 np0005603541 python3[30414]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:38 np0005603541 python3[30487]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:38 np0005603541 python3[30513]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:38 np0005603541 python3[30586]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:39 np0005603541 python3[30612]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:39 np0005603541 python3[30685]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:39 np0005603541 python3[30711]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:40 np0005603541 python3[30784]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:40 np0005603541 python3[30810]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:20:40 np0005603541 python3[30883]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.3543708-34196-5516011063220/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:20:52 np0005603541 python3[30941]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:25:51 np0005603541 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 01:25:51 np0005603541 systemd[1]: session-7.scope: Consumed 4.518s CPU time.
Jan 31 01:25:51 np0005603541 systemd-logind[817]: Session 7 logged out. Waiting for processes to exit.
Jan 31 01:25:51 np0005603541 systemd-logind[817]: Removed session 7.
Jan 31 01:36:27 np0005603541 systemd-logind[817]: New session 8 of user zuul.
Jan 31 01:36:27 np0005603541 systemd[1]: Started Session 8 of User zuul.
Jan 31 01:36:28 np0005603541 python3.9[31104]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:36:30 np0005603541 python3.9[31285]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:36:44 np0005603541 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 01:36:44 np0005603541 systemd[1]: session-8.scope: Consumed 7.803s CPU time.
Jan 31 01:36:44 np0005603541 systemd-logind[817]: Session 8 logged out. Waiting for processes to exit.
Jan 31 01:36:44 np0005603541 systemd-logind[817]: Removed session 8.
Jan 31 01:36:59 np0005603541 systemd-logind[817]: New session 9 of user zuul.
Jan 31 01:36:59 np0005603541 systemd[1]: Started Session 9 of User zuul.
Jan 31 01:37:00 np0005603541 python3.9[31497]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 01:37:01 np0005603541 python3.9[31671]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:37:02 np0005603541 python3.9[31823]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:37:04 np0005603541 python3.9[31976]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:37:04 np0005603541 python3.9[32128]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:37:05 np0005603541 python3.9[32280]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:37:06 np0005603541 python3.9[32403]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841425.1978395-177-272737297840018/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:37:07 np0005603541 python3.9[32555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:37:07 np0005603541 python3.9[32711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:37:08 np0005603541 python3.9[32863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:37:09 np0005603541 python3.9[33013]: ansible-ansible.builtin.service_facts Invoked
Jan 31 01:37:12 np0005603541 python3.9[33266]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
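[Annotation] `/proc/cmdline` is read-only, so this `lineinfile` task with `state=present` presumably serves to assert that `cloud-init=disabled` is already among the kernel arguments rather than to write it. Kernel arguments are whitespace-separated `key[=value]` tokens, so the check it implies can be sketched as (illustrative only, sample cmdline taken from the boot log at the top of this file):

```python
# Illustrative sketch: test whether a token such as "cloud-init=disabled"
# appears on a kernel command line (whitespace-separated tokens).
def cmdline_has_token(cmdline: str, token: str) -> bool:
    return token in cmdline.split()

sample = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 "
          "root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 "
          "no_timer_check net.ifnames=0")
print(cmdline_has_token(sample, "no_timer_check"))
```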
Jan 31 01:37:13 np0005603541 python3.9[33416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:37:14 np0005603541 python3.9[33570]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:37:15 np0005603541 python3.9[33728]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:37:16 np0005603541 python3.9[33812]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:38:03 np0005603541 systemd[1]: Reloading.
Jan 31 01:38:03 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:38:03 np0005603541 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 01:38:03 np0005603541 systemd[1]: Reloading.
Jan 31 01:38:03 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:38:03 np0005603541 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 01:38:03 np0005603541 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 01:38:04 np0005603541 systemd[1]: Reloading.
Jan 31 01:38:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:38:04 np0005603541 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 01:38:04 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 01:38:04 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 01:39:02 np0005603541 kernel: SELinux:  Converting 2728 SID table entries...
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:39:02 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:39:02 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 01:39:02 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:39:02 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:39:02 np0005603541 systemd[1]: Reloading.
Jan 31 01:39:02 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:39:02 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:39:03 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:39:03 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:39:03 np0005603541 systemd[1]: run-re26d2e75f7d84facaf5fcc142b29f52a.service: Deactivated successfully.
Jan 31 01:40:03 np0005603541 python3.9[35321]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:06 np0005603541 python3.9[35602]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 01:40:07 np0005603541 python3.9[35754]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 01:40:09 np0005603541 python3.9[35908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:40:10 np0005603541 python3.9[36060]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
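[Annotation] With `state=present`, `ansible.posix.mount` only records the entry in `/etc/fstab`; it does not activate the swap (that happens via `mkswap`/`swapon` later in this log). Given the parameters above (`src=/swap`, `name=none`, `fstype=swap`, `opts=sw`, `dump=0`, `passno=0`), the fstab line it writes would be roughly:

```
/swap none swap sw 0 0
```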
Jan 31 01:40:16 np0005603541 python3.9[36212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:40:17 np0005603541 python3.9[36364]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:40:17 np0005603541 python3.9[36487]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841616.6691577-666-126597198437774/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:40:21 np0005603541 python3.9[36639]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:40:22 np0005603541 python3.9[36791]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:23 np0005603541 python3.9[36944]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:40:25 np0005603541 python3.9[37096]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 01:40:25 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:40:25 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:40:26 np0005603541 python3.9[37250]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:40:27 np0005603541 python3.9[37409]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 01:40:28 np0005603541 python3.9[37569]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 01:40:29 np0005603541 python3.9[37722]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:40:30 np0005603541 python3.9[37880]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 01:40:31 np0005603541 python3.9[38032]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:40:33 np0005603541 python3.9[38185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:40:34 np0005603541 python3.9[38337]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:40:34 np0005603541 python3.9[38460]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841633.9194176-1023-273744986279334/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:40:35 np0005603541 python3.9[38612]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:40:36 np0005603541 systemd[1]: Starting Load Kernel Modules...
Jan 31 01:40:36 np0005603541 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 01:40:36 np0005603541 kernel: Bridge firewalling registered
Jan 31 01:40:36 np0005603541 systemd-modules-load[38616]: Inserted module 'br_netfilter'
Jan 31 01:40:36 np0005603541 systemd[1]: Finished Load Kernel Modules.
Jan 31 01:40:36 np0005603541 python3.9[38772]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:40:37 np0005603541 python3.9[38895]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841636.3231277-1092-91506667060965/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:40:38 np0005603541 python3.9[39047]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:40:41 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 01:40:41 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 01:40:41 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:40:41 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:40:41 np0005603541 systemd[1]: Reloading.
Jan 31 01:40:41 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:40:41 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:40:44 np0005603541 python3.9[42312]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:40:45 np0005603541 python3.9[42948]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 01:40:45 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:40:45 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:40:45 np0005603541 systemd[1]: man-db-cache-update.service: Consumed 3.900s CPU time.
Jan 31 01:40:45 np0005603541 systemd[1]: run-rd48ade34280c468fb348f995e686e99f.service: Deactivated successfully.
Jan 31 01:40:45 np0005603541 python3.9[43099]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:40:46 np0005603541 python3.9[43251]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:46 np0005603541 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 01:40:47 np0005603541 systemd[1]: Starting Authorization Manager...
Jan 31 01:40:47 np0005603541 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 01:40:47 np0005603541 polkitd[43468]: Started polkitd version 0.117
Jan 31 01:40:47 np0005603541 systemd[1]: Started Authorization Manager.
Jan 31 01:40:48 np0005603541 python3.9[43638]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:40:48 np0005603541 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 01:40:48 np0005603541 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 01:40:48 np0005603541 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 01:40:48 np0005603541 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 01:40:48 np0005603541 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 01:40:50 np0005603541 python3.9[43800]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 01:40:53 np0005603541 python3.9[43952]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:40:53 np0005603541 systemd[1]: Reloading.
Jan 31 01:40:53 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:40:54 np0005603541 python3.9[44140]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:40:54 np0005603541 systemd[1]: Reloading.
Jan 31 01:40:54 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:40:55 np0005603541 python3.9[44329]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:56 np0005603541 python3.9[44482]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:56 np0005603541 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
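[Annotation] The sizes are consistent with the earlier `dd if=/dev/zero of=/swap count=1024 bs=1M`: that writes a 1 GiB file, and the kernel reports 1048572 KiB of usable swap, i.e. the file size minus the one 4 KiB page that `mkswap` reserves for the swap header:

```python
# 1 GiB swap file created with dd count=1024 bs=1M
file_kib = 1024 * 1024    # 1048576 KiB total
page_kib = 4              # x86-64 page size; mkswap reserves one page for its header
usable_kib = file_kib - page_kib
print(usable_kib)         # matches the kernel's "Adding 1048572k swap on /swap"
```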
Jan 31 01:40:57 np0005603541 python3.9[44635]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:59 np0005603541 python3.9[44797]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
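[Annotation] The value written here is significant: per the kernel's KSM documentation, `2` not only stops the ksmd daemon (which the preceding tasks already disabled via `ksm.service`/`ksmtuned.service`) but also unmerges all pages KSM had previously shared:

```
# Values accepted by /sys/kernel/mm/ksm/run (kernel KSM admin guide):
#   0  stop ksmd, keep already-merged pages shared
#   1  run ksmd
#   2  stop ksmd and unmerge all merged pages  <- what the task above writes
# echo 2 > /sys/kernel/mm/ksm/run    (requires root)
```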
Jan 31 01:41:02 np0005603541 python3.9[44951]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:41:02 np0005603541 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 01:41:02 np0005603541 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 01:41:02 np0005603541 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 01:41:02 np0005603541 systemd[1]: Starting Apply Kernel Variables...
Jan 31 01:41:02 np0005603541 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 01:41:02 np0005603541 systemd[1]: Finished Apply Kernel Variables.
Jan 31 01:41:02 np0005603541 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 01:41:02 np0005603541 systemd[1]: session-9.scope: Consumed 2min 1.738s CPU time.
Jan 31 01:41:02 np0005603541 systemd-logind[817]: Session 9 logged out. Waiting for processes to exit.
Jan 31 01:41:02 np0005603541 systemd-logind[817]: Removed session 9.
Jan 31 01:41:09 np0005603541 systemd-logind[817]: New session 10 of user zuul.
Jan 31 01:41:09 np0005603541 systemd[1]: Started Session 10 of User zuul.
Jan 31 01:41:10 np0005603541 python3.9[45134]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:41:11 np0005603541 python3.9[45290]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 01:41:12 np0005603541 python3.9[45443]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:41:13 np0005603541 python3.9[45601]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 01:41:15 np0005603541 python3.9[45761]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:41:15 np0005603541 python3.9[45845]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 01:41:19 np0005603541 python3.9[46009]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:41:30 np0005603541 kernel: SELinux:  Converting 2740 SID table entries...
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:41:30 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:41:30 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 01:41:30 np0005603541 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 01:41:32 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:41:32 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:41:32 np0005603541 systemd[1]: Reloading.
Jan 31 01:41:32 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:41:32 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:41:32 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:41:32 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:41:32 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:41:32 np0005603541 systemd[1]: run-r86e6c44f3f12461da9d4c09f9a7ec01d.service: Deactivated successfully.
Jan 31 01:41:44 np0005603541 python3.9[47108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 01:41:44 np0005603541 systemd[1]: Reloading.
Jan 31 01:41:44 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:41:44 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:41:44 np0005603541 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 01:41:44 np0005603541 chown[47151]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 01:41:44 np0005603541 ovs-ctl[47156]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 01:41:44 np0005603541 ovs-ctl[47156]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 01:41:45 np0005603541 ovs-ctl[47156]: Starting ovsdb-server [  OK  ]
Jan 31 01:41:45 np0005603541 ovs-vsctl[47205]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 01:41:45 np0005603541 ovs-vsctl[47225]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e3f3772b-46c1-4a7f-ae43-0efc80b30197\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 01:41:45 np0005603541 ovs-ctl[47156]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 01:41:45 np0005603541 ovs-ctl[47156]: Enabling remote OVSDB managers [  OK  ]
Jan 31 01:41:45 np0005603541 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 01:41:45 np0005603541 ovs-vsctl[47231]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 01:41:45 np0005603541 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 01:41:45 np0005603541 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 01:41:45 np0005603541 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 01:41:45 np0005603541 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 01:41:45 np0005603541 ovs-ctl[47275]: Inserting openvswitch module [  OK  ]
Jan 31 01:41:45 np0005603541 ovs-ctl[47244]: Starting ovs-vswitchd [  OK  ]
Jan 31 01:41:45 np0005603541 ovs-vsctl[47293]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 01:41:45 np0005603541 ovs-ctl[47244]: Enabling remote OVSDB managers [  OK  ]
Jan 31 01:41:45 np0005603541 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 01:41:45 np0005603541 systemd[1]: Starting Open vSwitch...
Jan 31 01:41:45 np0005603541 systemd[1]: Finished Open vSwitch.
Jan 31 01:41:46 np0005603541 python3.9[47444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:41:47 np0005603541 python3.9[47596]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
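[Annotation] This `sefcontext` task registers a persistent SELinux file-context mapping; as root, the roughly equivalent CLI would be `semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'` followed by `restorecon -Rv /var/lib/edpm-config`. The target is a regular expression matched against the full path, so it covers both the directory itself and everything under it:

```python
import re

# The fcontext target '/var/lib/edpm-config(/.*)?' is a regex anchored over
# the whole path: it matches the directory and any path beneath it.
pattern = re.compile(r"^/var/lib/edpm-config(/.*)?$")
print(bool(pattern.match("/var/lib/edpm-config/foo/bar")))
```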
Jan 31 01:41:48 np0005603541 kernel: SELinux:  Converting 2754 SID table entries...
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:41:48 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:41:50 np0005603541 python3.9[47751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:41:51 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 01:41:51 np0005603541 python3.9[47909]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:41:53 np0005603541 python3.9[48062]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:41:54 np0005603541 python3.9[48349]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 01:41:55 np0005603541 python3.9[48499]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:41:56 np0005603541 python3.9[48653]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:41:58 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:41:58 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:41:58 np0005603541 systemd[1]: Reloading.
Jan 31 01:41:58 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:41:58 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:41:58 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:41:59 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:41:59 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:41:59 np0005603541 systemd[1]: run-rffa80ad15bd845579d2cab1a9153e664.service: Deactivated successfully.
Jan 31 01:42:04 np0005603541 python3.9[48970]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:42:04 np0005603541 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 01:42:04 np0005603541 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 01:42:04 np0005603541 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 01:42:04 np0005603541 systemd[1]: Stopping Network Manager...
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8271] caught SIGTERM, shutting down normally.
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8283] dhcp4 (eth0): canceled DHCP transaction
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8284] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8284] dhcp4 (eth0): state changed no lease
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8286] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:42:04 np0005603541 NetworkManager[7199]: <info>  [1769841724.8350] exiting (success)
Jan 31 01:42:04 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:42:04 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:42:04 np0005603541 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 01:42:04 np0005603541 systemd[1]: Stopped Network Manager.
Jan 31 01:42:04 np0005603541 systemd[1]: NetworkManager.service: Consumed 22.195s CPU time, 4.2M memory peak, read 0B from disk, written 16.5K to disk.
Jan 31 01:42:04 np0005603541 systemd[1]: Starting Network Manager...
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.8849] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:991be50c-1b19-4795-a191-f9fb0ceb117c)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.8851] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.8898] manager[0x55c947c21000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:42:04 np0005603541 systemd[1]: Starting Hostname Service...
Jan 31 01:42:04 np0005603541 systemd[1]: Started Hostname Service.
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9511] hostname: hostname: using hostnamed
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9512] hostname: static hostname changed from (none) to "compute-0"
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9518] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9526] manager[0x55c947c21000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9526] manager[0x55c947c21000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9555] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9567] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9568] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9569] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9570] manager: Networking is enabled by state file
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9573] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9578] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9616] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9628] dhcp: init: Using DHCP client 'internal'
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9631] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9637] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9644] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9653] device (lo): Activation: starting connection 'lo' (6a956e3f-91e5-480d-b46c-6c22e1e7ca7a)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9660] device (eth0): carrier: link connected
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9664] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9670] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9671] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9679] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9686] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9693] device (eth1): carrier: link connected
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9698] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9704] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (94ec5c75-c852-55dd-83db-8db69359c060) (indicated)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9705] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9709] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9717] device (eth1): Activation: starting connection 'ci-private-network' (94ec5c75-c852-55dd-83db-8db69359c060)
Jan 31 01:42:04 np0005603541 systemd[1]: Started Network Manager.
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9727] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9737] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9739] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9743] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9745] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9748] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9750] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9752] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9754] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9760] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9763] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9780] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9797] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9810] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9812] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9817] device (lo): Activation: successful, device activated.
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9826] dhcp4 (eth0): state changed new lease, address=38.102.83.251
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9834] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:42:04 np0005603541 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9911] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9916] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9920] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9923] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9925] device (eth1): Activation: successful, device activated.
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9968] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9969] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9971] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9973] device (eth0): Activation: successful, device activated.
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9977] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:42:04 np0005603541 NetworkManager[48983]: <info>  [1769841724.9979] manager: startup complete
Jan 31 01:42:05 np0005603541 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:42:06 np0005603541 python3.9[49196]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:42:10 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:42:10 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:42:10 np0005603541 systemd[1]: Reloading.
Jan 31 01:42:10 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:42:10 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:42:10 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:42:11 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:42:11 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:42:11 np0005603541 systemd[1]: run-rb55a6fee25404343b0331b965ca32d0a.service: Deactivated successfully.
Jan 31 01:42:15 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:42:29 np0005603541 python3.9[49657]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:42:30 np0005603541 python3.9[49809]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:31 np0005603541 python3.9[49963]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:32 np0005603541 python3.9[50115]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:32 np0005603541 python3.9[50267]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:33 np0005603541 python3.9[50419]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:34 np0005603541 python3.9[50571]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:42:34 np0005603541 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:42:35 np0005603541 python3.9[50694]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841753.9675605-646-74640744842529/.source _original_basename=.415xipcg follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:36 np0005603541 python3.9[50848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:36 np0005603541 python3.9[51000]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 01:42:37 np0005603541 python3.9[51152]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:39 np0005603541 python3.9[51579]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 01:42:40 np0005603541 ansible-async_wrapper.py[51754]: Invoked with j658800159448 300 /home/zuul/.ansible/tmp/ansible-tmp-1769841760.1196759-844-36299975584087/AnsiballZ_edpm_os_net_config.py _
Jan 31 01:42:40 np0005603541 ansible-async_wrapper.py[51757]: Starting module and watcher
Jan 31 01:42:40 np0005603541 ansible-async_wrapper.py[51757]: Start watching 51758 (300)
Jan 31 01:42:40 np0005603541 ansible-async_wrapper.py[51758]: Start module (51758)
Jan 31 01:42:40 np0005603541 ansible-async_wrapper.py[51754]: Return async_wrapper task started.
Jan 31 01:42:41 np0005603541 python3.9[51759]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 01:42:41 np0005603541 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 01:42:41 np0005603541 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 01:42:41 np0005603541 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 01:42:41 np0005603541 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 01:42:41 np0005603541 kernel: cfg80211: failed to load regulatory.db
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.8387] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.8413] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9018] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9024] audit: op="connection-add" uuid="6b0c6ee8-08ee-43ad-b5d6-bfc5736498db" name="br-ex-br" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9040] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9042] audit: op="connection-add" uuid="f0cd6877-2aa8-4a2f-973d-6a23c1434d9d" name="br-ex-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9055] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9058] audit: op="connection-add" uuid="0252d8c5-24a1-4408-9438-0cccc95f7e5a" name="eth1-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9070] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9073] audit: op="connection-add" uuid="8eba3eb6-4cf6-428f-b174-7f2c395d7284" name="vlan20-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9083] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9085] audit: op="connection-add" uuid="08e6d64d-cc9c-4124-9af0-85dbf47477bc" name="vlan21-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9095] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9097] audit: op="connection-add" uuid="51c9145e-c132-4850-91b8-70e9e79c6ce0" name="vlan22-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9106] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9109] audit: op="connection-add" uuid="45bb6b02-518b-4264-aaf7-e03983608206" name="vlan23-port" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9125] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9141] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9143] audit: op="connection-add" uuid="c43afd4f-53d0-4d2e-b30a-f382e3b68918" name="br-ex-if" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9175] audit: op="connection-update" uuid="94ec5c75-c852-55dd-83db-8db69359c060" name="ci-private-network" args="ipv4.addresses,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.routing-rules,connection.slave-type,connection.controller,connection.timestamp,connection.port-type,connection.master,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ovs-interface.type,ovs-external-ids.data" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9192] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9194] audit: op="connection-add" uuid="49315a15-4ae7-42b1-a6e4-d19ff716f9ed" name="vlan20-if" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9210] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9211] audit: op="connection-add" uuid="cefb452f-602e-431d-8b56-90e76f1e61bd" name="vlan21-if" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9227] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9228] audit: op="connection-add" uuid="b80c174a-7d92-43df-bfeb-44e968a3a89d" name="vlan22-if" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9243] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9245] audit: op="connection-add" uuid="8754bdee-095b-4010-a040-f1a451ea268c" name="vlan23-if" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9256] audit: op="connection-delete" uuid="b11f7c16-9ea4-360a-b2de-a9062d089551" name="Wired connection 1" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9267] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9270] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9277] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9281] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (6b0c6ee8-08ee-43ad-b5d6-bfc5736498db)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9282] audit: op="connection-activate" uuid="6b0c6ee8-08ee-43ad-b5d6-bfc5736498db" name="br-ex-br" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9285] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9286] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9293] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9299] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f0cd6877-2aa8-4a2f-973d-6a23c1434d9d)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9302] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9303] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9309] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9314] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0252d8c5-24a1-4408-9438-0cccc95f7e5a)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9316] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9318] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9323] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9329] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8eba3eb6-4cf6-428f-b174-7f2c395d7284)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9331] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9333] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9339] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9345] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (08e6d64d-cc9c-4124-9af0-85dbf47477bc)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9347] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9348] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9354] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9359] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (51c9145e-c132-4850-91b8-70e9e79c6ce0)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9361] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9363] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9369] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9374] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (45bb6b02-518b-4264-aaf7-e03983608206)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9376] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9380] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9382] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9389] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9390] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9394] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9400] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c43afd4f-53d0-4d2e-b30a-f382e3b68918)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9401] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9406] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9408] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9409] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9411] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9422] device (eth1): disconnecting for new activation request.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9423] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9425] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9427] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9428] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9430] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9431] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9434] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9437] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (49315a15-4ae7-42b1-a6e4-d19ff716f9ed)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9438] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9441] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9442] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9443] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9445] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9445] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9448] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9453] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (cefb452f-602e-431d-8b56-90e76f1e61bd)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9453] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9456] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9458] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9459] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9461] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9462] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9465] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9469] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b80c174a-7d92-43df-bfeb-44e968a3a89d)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9469] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9472] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9473] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9474] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9477] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <warn>  [1769841762.9478] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9480] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9484] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (8754bdee-095b-4010-a040-f1a451ea268c)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9484] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9487] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9488] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9489] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9491] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9502] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9505] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9509] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9511] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9517] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9520] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 kernel: ovs-system: entered promiscuous mode
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9544] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 kernel: Timeout policy base is empty
Jan 31 01:42:42 np0005603541 systemd-udevd[51765]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9557] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9559] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9563] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9566] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9569] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9570] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9574] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9578] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9581] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9583] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9587] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9590] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9593] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9595] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9599] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9602] dhcp4 (eth0): canceled DHCP transaction
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9602] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9603] dhcp4 (eth0): state changed no lease
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9604] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9611] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9615] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51760 uid=0 result="fail" reason="Device is not activated"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9619] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9669] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9673] dhcp4 (eth0): state changed new lease, address=38.102.83.251
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9678] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9705] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9712] device (eth1): disconnecting for new activation request.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9712] audit: op="connection-activate" uuid="94ec5c75-c852-55dd-83db-8db69359c060" name="ci-private-network" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9728] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 01:42:42 np0005603541 kernel: br-ex: entered promiscuous mode
Jan 31 01:42:42 np0005603541 kernel: vlan22: entered promiscuous mode
Jan 31 01:42:42 np0005603541 systemd-udevd[51766]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9862] device (eth1): Activation: starting connection 'ci-private-network' (94ec5c75-c852-55dd-83db-8db69359c060)
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9866] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9884] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51760 uid=0 result="success"
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9892] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9894] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 kernel: vlan20: entered promiscuous mode
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9899] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9902] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9920] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9928] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 systemd-udevd[51764]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9929] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9930] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9931] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9932] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9933] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9937] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 kernel: vlan23: entered promiscuous mode
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9947] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9949] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9951] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9954] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9956] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9959] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9961] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9963] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9965] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9967] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9969] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9971] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9982] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 01:42:42 np0005603541 NetworkManager[48983]: <info>  [1769841762.9994] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0002] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 kernel: vlan21: entered promiscuous mode
Jan 31 01:42:43 np0005603541 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0328] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0333] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0339] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0340] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0343] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0346] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0349] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0353] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0359] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0377] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0383] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0385] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0392] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0395] device (eth1): Activation: successful, device activated.
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0406] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0409] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0420] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0425] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0429] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0434] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0439] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0442] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0445] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0451] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0454] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:42:43 np0005603541 NetworkManager[48983]: <info>  [1769841763.0457] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.1600] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.3527] checkpoint[0x55c947bf5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.3529] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 python3.9[52119]: ansible-ansible.legacy.async_status Invoked with jid=j658800159448.51754 mode=status _async_dir=/root/.ansible_async
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.6328] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.6337] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.8068] audit: op="networking-control" arg="global-dns-configuration" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.8103] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.8149] audit: op="networking-control" arg="global-dns-configuration" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.8171] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.9359] checkpoint[0x55c947bf5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 01:42:44 np0005603541 NetworkManager[48983]: <info>  [1769841764.9364] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51760 uid=0 result="success"
Jan 31 01:42:44 np0005603541 ansible-async_wrapper.py[51758]: Module complete (51758)
Jan 31 01:42:45 np0005603541 ansible-async_wrapper.py[51757]: Done in kid B.
Jan 31 01:42:48 np0005603541 python3.9[52224]: ansible-ansible.legacy.async_status Invoked with jid=j658800159448.51754 mode=status _async_dir=/root/.ansible_async
Jan 31 01:42:48 np0005603541 python3.9[52324]: ansible-ansible.legacy.async_status Invoked with jid=j658800159448.51754 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 01:42:49 np0005603541 python3.9[52476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:42:49 np0005603541 python3.9[52599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841768.928825-925-36299981090327/.source.returncode _original_basename=.l3den74a follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:50 np0005603541 python3.9[52751]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:42:51 np0005603541 python3.9[52874]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841770.288769-973-256151677874076/.source.cfg _original_basename=.kpppdy5v follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:42:51 np0005603541 python3.9[53027]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:42:52 np0005603541 systemd[1]: Reloading Network Manager...
Jan 31 01:42:52 np0005603541 NetworkManager[48983]: <info>  [1769841772.0498] audit: op="reload" arg="0" pid=53031 uid=0 result="success"
Jan 31 01:42:52 np0005603541 NetworkManager[48983]: <info>  [1769841772.0510] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 01:42:52 np0005603541 systemd[1]: Reloaded Network Manager.
Jan 31 01:42:52 np0005603541 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 01:42:52 np0005603541 systemd[1]: session-10.scope: Consumed 44.636s CPU time.
Jan 31 01:42:52 np0005603541 systemd-logind[817]: Session 10 logged out. Waiting for processes to exit.
Jan 31 01:42:52 np0005603541 systemd-logind[817]: Removed session 10.
Jan 31 01:42:57 np0005603541 systemd-logind[817]: New session 11 of user zuul.
Jan 31 01:42:57 np0005603541 systemd[1]: Started Session 11 of User zuul.
Jan 31 01:42:58 np0005603541 python3.9[53215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:42:59 np0005603541 python3.9[53369]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:43:00 np0005603541 python3.9[53562]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:43:01 np0005603541 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 01:43:01 np0005603541 systemd[1]: session-11.scope: Consumed 2.078s CPU time.
Jan 31 01:43:01 np0005603541 systemd-logind[817]: Session 11 logged out. Waiting for processes to exit.
Jan 31 01:43:01 np0005603541 systemd-logind[817]: Removed session 11.
Jan 31 01:43:02 np0005603541 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:43:06 np0005603541 systemd-logind[817]: New session 12 of user zuul.
Jan 31 01:43:06 np0005603541 systemd[1]: Started Session 12 of User zuul.
Jan 31 01:43:07 np0005603541 python3.9[53744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:43:08 np0005603541 python3.9[53899]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:43:09 np0005603541 python3.9[54055]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:43:10 np0005603541 python3.9[54139]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:43:12 np0005603541 python3.9[54293]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:43:13 np0005603541 python3.9[54489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:14 np0005603541 python3.9[54641]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:43:14 np0005603541 systemd[1]: var-lib-containers-storage-overlay-compat471528830-merged.mount: Deactivated successfully.
Jan 31 01:43:15 np0005603541 podman[54642]: 2026-01-31 06:43:15.047375838 +0000 UTC m=+0.426490180 system refresh
Jan 31 01:43:15 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:43:16 np0005603541 python3.9[54804]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:43:17 np0005603541 python3.9[54927]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841795.9022763-197-35024420586314/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c6aae091e52d9ba7625f1c471c5cef5f0b1a7daa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:17 np0005603541 python3.9[55079]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:43:18 np0005603541 python3.9[55202]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841797.30468-242-40953152485302/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e5b84cbf5536d1747818507bbe53a53ed67676dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:43:19 np0005603541 python3.9[55354]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:43:19 np0005603541 python3.9[55506]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:43:20 np0005603541 python3.9[55658]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:43:21 np0005603541 python3.9[55810]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:43:21 np0005603541 python3.9[55962]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:43:24 np0005603541 python3.9[56115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:43:25 np0005603541 python3.9[56269]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:43:25 np0005603541 python3.9[56421]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:43:26 np0005603541 python3.9[56573]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:43:27 np0005603541 python3.9[56726]: ansible-service_facts Invoked
Jan 31 01:43:27 np0005603541 network[56743]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 01:43:27 np0005603541 network[56744]: 'network-scripts' will be removed from distribution in near future.
Jan 31 01:43:27 np0005603541 network[56745]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 01:43:31 np0005603541 python3.9[57197]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:43:34 np0005603541 python3.9[57350]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 01:43:36 np0005603541 python3.9[57502]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:43:36 np0005603541 python3.9[57627]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841815.8007534-674-16557051267918/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:37 np0005603541 python3.9[57781]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:43:38 np0005603541 python3.9[57906]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841817.2506483-719-242733251117749/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:39 np0005603541 python3.9[58060]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:42 np0005603541 python3.9[58214]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:43:43 np0005603541 python3.9[58298]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:43:46 np0005603541 python3.9[58452]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:43:47 np0005603541 python3.9[58536]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:43:47 np0005603541 chronyd[826]: chronyd exiting
Jan 31 01:43:47 np0005603541 systemd[1]: Stopping NTP client/server...
Jan 31 01:43:47 np0005603541 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 01:43:47 np0005603541 systemd[1]: Stopped NTP client/server.
Jan 31 01:43:47 np0005603541 systemd[1]: Starting NTP client/server...
Jan 31 01:43:47 np0005603541 chronyd[58544]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 01:43:47 np0005603541 chronyd[58544]: Frequency -23.781 +/- 0.152 ppm read from /var/lib/chrony/drift
Jan 31 01:43:47 np0005603541 chronyd[58544]: Loaded seccomp filter (level 2)
Jan 31 01:43:47 np0005603541 systemd[1]: Started NTP client/server.
Jan 31 01:43:48 np0005603541 systemd-logind[817]: Session 12 logged out. Waiting for processes to exit.
Jan 31 01:43:48 np0005603541 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 01:43:48 np0005603541 systemd[1]: session-12.scope: Consumed 22.210s CPU time.
Jan 31 01:43:48 np0005603541 systemd-logind[817]: Removed session 12.
Jan 31 01:43:54 np0005603541 systemd-logind[817]: New session 13 of user zuul.
Jan 31 01:43:54 np0005603541 systemd[1]: Started Session 13 of User zuul.
Jan 31 01:43:54 np0005603541 python3.9[58726]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:55 np0005603541 python3.9[58878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:43:56 np0005603541 python3.9[59001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841834.939623-62-25396489282405/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:43:56 np0005603541 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 01:43:56 np0005603541 systemd[1]: session-13.scope: Consumed 1.321s CPU time.
Jan 31 01:43:56 np0005603541 systemd-logind[817]: Session 13 logged out. Waiting for processes to exit.
Jan 31 01:43:56 np0005603541 systemd-logind[817]: Removed session 13.
Jan 31 01:44:02 np0005603541 systemd-logind[817]: New session 14 of user zuul.
Jan 31 01:44:02 np0005603541 systemd[1]: Started Session 14 of User zuul.
Jan 31 01:44:03 np0005603541 python3.9[59179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:44:04 np0005603541 python3.9[59335]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:05 np0005603541 python3.9[59510]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:05 np0005603541 python3.9[59633]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769841844.519153-83-145317886440743/.source.json _original_basename=.z_ee92jo follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:06 np0005603541 python3.9[59785]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:07 np0005603541 python3.9[59908]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841846.388393-152-46106650929163/.source _original_basename=.090nhy3l follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:08 np0005603541 python3.9[60060]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:44:08 np0005603541 python3.9[60212]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:09 np0005603541 python3.9[60335]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841848.365485-224-94812314265252/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:44:09 np0005603541 python3.9[60487]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:10 np0005603541 python3.9[60610]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841849.359802-224-43821754291758/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:44:11 np0005603541 python3.9[60762]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:11 np0005603541 python3.9[60914]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:12 np0005603541 python3.9[61037]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841851.2038424-335-193560868618190/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:12 np0005603541 python3.9[61189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:13 np0005603541 python3.9[61312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841852.4902217-380-261397129727168/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:14 np0005603541 python3.9[61464]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:44:14 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:14 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:14 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:14 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:14 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:14 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:15 np0005603541 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 01:44:15 np0005603541 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 01:44:15 np0005603541 python3.9[61692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:16 np0005603541 python3.9[61815]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841855.3295596-449-15047309319718/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:17 np0005603541 python3.9[61967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:17 np0005603541 python3.9[62090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841856.680443-494-143569524296369/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:18 np0005603541 python3.9[62242]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:44:18 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:18 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:18 np0005603541 systemd[1]: Starting Create netns directory...
Jan 31 01:44:18 np0005603541 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 01:44:18 np0005603541 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 01:44:18 np0005603541 systemd[1]: Finished Create netns directory.
Jan 31 01:44:19 np0005603541 python3.9[62468]: ansible-ansible.builtin.service_facts Invoked
Jan 31 01:44:19 np0005603541 network[62485]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 01:44:19 np0005603541 network[62486]: 'network-scripts' will be removed from distribution in near future.
Jan 31 01:44:19 np0005603541 network[62487]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 01:44:22 np0005603541 python3.9[62749]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:44:22 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:22 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:22 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:22 np0005603541 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 01:44:23 np0005603541 iptables.init[62790]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 01:44:23 np0005603541 iptables.init[62790]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 01:44:23 np0005603541 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 01:44:23 np0005603541 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 01:44:24 np0005603541 python3.9[62988]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:44:24 np0005603541 python3.9[63142]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:44:24 np0005603541 systemd[1]: Reloading.
Jan 31 01:44:25 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:44:25 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:44:25 np0005603541 systemd[1]: Starting Netfilter Tables...
Jan 31 01:44:25 np0005603541 systemd[1]: Finished Netfilter Tables.
Jan 31 01:44:25 np0005603541 python3.9[63334]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:44:27 np0005603541 python3.9[63487]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:27 np0005603541 python3.9[63612]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841866.6040668-701-94270776194270/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:28 np0005603541 python3.9[63765]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:44:28 np0005603541 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 01:44:28 np0005603541 systemd[1]: Reloaded OpenSSH server daemon.
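The copy task above uses `validate=/usr/sbin/sshd -T -f %s`, so the staged file is checked before it replaces /etc/ssh/sshd_config and sshd is only reloaded afterwards. A minimal sketch of that stage-validate-install pattern, with hypothetical config content; it only exercises the validator when sshd is actually present:

```shell
# Stage the new config in a temp file, as ansible's copy module does,
# and let the validator decide whether it may replace the live file.
stage=$(mktemp)
printf 'PasswordAuthentication no\n' > "$stage"   # hypothetical content

if [ -x /usr/sbin/sshd ]; then
    if /usr/sbin/sshd -T -f "$stage" >/dev/null 2>&1; then
        echo "config valid; safe to install and reload sshd"
    else
        echo "config rejected; live sshd_config left untouched"
    fi
else
    echo "sshd not available here; skipping validation"
fi
rm -f "$stage"
```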
Jan 31 01:44:29 np0005603541 python3.9[63921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:29 np0005603541 python3.9[64073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:30 np0005603541 python3.9[64196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841869.2988615-794-42279202519541/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:31 np0005603541 python3.9[64348]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 01:44:31 np0005603541 systemd[1]: Starting Time & Date Service...
Jan 31 01:44:31 np0005603541 systemd[1]: Started Time & Date Service.
Jan 31 01:44:32 np0005603541 python3.9[64504]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:33 np0005603541 python3.9[64656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:34 np0005603541 python3.9[64779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841873.1728501-899-68903176243702/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:34 np0005603541 python3.9[64931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:35 np0005603541 python3.9[65054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841874.3803394-944-220526569080431/.source.yaml _original_basename=.clt2spnf follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:36 np0005603541 python3.9[65206]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:36 np0005603541 python3.9[65329]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841875.584857-989-31567804395587/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:37 np0005603541 python3.9[65481]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:44:37 np0005603541 python3.9[65634]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:44:38 np0005603541 python3[65787]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 01:44:39 np0005603541 python3.9[65939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:39 np0005603541 python3.9[66062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841879.0462277-1106-108643932305212/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:40 np0005603541 python3.9[66214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:41 np0005603541 python3.9[66337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841880.2897465-1151-205121052168780/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:42 np0005603541 python3.9[66489]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:42 np0005603541 python3.9[66612]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841881.8023233-1196-72109181363022/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:43 np0005603541 python3.9[66764]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:43 np0005603541 python3.9[66887]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841882.9914865-1241-221709066082738/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:44 np0005603541 python3.9[67039]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:44:45 np0005603541 python3.9[67162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841884.2299638-1286-249887005394419/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:46 np0005603541 python3.9[67314]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:46 np0005603541 python3.9[67466]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:44:47 np0005603541 python3.9[67625]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
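The tasks above render separate nft fragments (chains, flushes, rules, jump updates), syntax-check the concatenation with `nft -c -f -`, and only then wire the includes into /etc/sysconfig/nftables.conf. A small stand-in sketch of that check-before-apply pattern; the fragment contents here are hypothetical (the real files are generated from the edpm-config YAML), and the check is skipped when nft is unavailable or unprivileged:

```shell
# Build tiny stand-in fragments; repeated `table` declarations in nft
# files merge, which is why concatenating fragments works.
dir=$(mktemp -d)
cat > "$dir/edpm-chains.nft" <<'EOF'
table inet edpm {
    chain EDPM_INPUT {
    }
}
EOF
cat > "$dir/edpm-rules.nft" <<'EOF'
table inet edpm {
    chain EDPM_INPUT {
        tcp dport 22 accept
    }
}
EOF

# Dry-run the concatenated ruleset, mirroring `cat ... | nft -c -f -`
# from the log. nft -c may still need CAP_NET_ADMIN, so don't assume.
if command -v nft >/dev/null 2>&1; then
    cat "$dir"/edpm-chains.nft "$dir"/edpm-rules.nft | nft -c -f - \
        && echo "ruleset OK" \
        || echo "nft check failed (may need CAP_NET_ADMIN)"
else
    echo "nft not installed; skipping syntax check"
fi
rm -rf "$dir"
```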
Jan 31 01:44:48 np0005603541 python3.9[67778]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:49 np0005603541 python3.9[67930]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:44:49 np0005603541 python3.9[68082]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 01:44:50 np0005603541 python3.9[68235]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
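`ansible.posix.mount` with `state=mounted` both mounts the filesystem and persists it in /etc/fstab. The equivalent manual steps (assuming the same paths and the `hugetlbfs` group seen in the file tasks above) would look roughly like this; everything is guarded because it needs root and kernel hugetlbfs support:

```shell
# Recreate the two hugepage mounts from the log by hand.
if [ "$(id -u)" -eq 0 ] && grep -q hugetlbfs /proc/filesystems; then
    # Mount points with the ownership/mode the file tasks set.
    install -d -m 0775 -o zuul -g hugetlbfs /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # state=mounted also persists these, i.e. fstab lines like:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
else
    echo "skipping hugetlbfs mounts (needs root and hugetlbfs support)"
fi
```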
Jan 31 01:44:51 np0005603541 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 01:44:51 np0005603541 systemd[1]: session-14.scope: Consumed 29.229s CPU time.
Jan 31 01:44:51 np0005603541 systemd-logind[817]: Session 14 logged out. Waiting for processes to exit.
Jan 31 01:44:51 np0005603541 systemd-logind[817]: Removed session 14.
Jan 31 01:45:00 np0005603541 systemd-logind[817]: New session 15 of user zuul.
Jan 31 01:45:00 np0005603541 systemd[1]: Started Session 15 of User zuul.
Jan 31 01:45:01 np0005603541 python3.9[68416]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 01:45:01 np0005603541 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 01:45:02 np0005603541 python3.9[68570]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:45:03 np0005603541 python3.9[68722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:45:04 np0005603541 python3.9[68874]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7oaNruBF82m85jI32p4Mj+yn4T3FBHQ7cMc6lELq3AspplPtBQsmBgDfhjfVg1I4+kEqlqvMmBXvkZu7SGFPiUPQlioc6MCfPrB8/wSLBG/pEWqlStSpdkbOBEEivzl5kpIYrbNpwH3q/sL6mbZB4fYlpLP6SY4uxDutOWZutUUlzDguTJUprXhv8BnwgqPoBM7wwuPY+U9PSdLY8pxG40xO+UQ9llhK0rTX9Io1k8OtlJeJu/zVCmcEIp7bMmk4GLYHzfhe1JW7+O8RnNxmyEbfEZpJRKD+squSzbEC4jYJSF2ZIG9++KZY33LUAy3Krn46o8Bo+vBJX3HRYdgtGaejzyYimDJ2OPL+UB5K9tTqqKbQlmhZODmFmTVgZabEHzHSuT+dTFBmmzW17ll4cWYHemkonjSM+nl3zO9Quwp+HRmkAa5/uJIFeVLZInx7/aeHCar427H5OnfpuSLc1X9uSNlPAvvIdlXagkfCOLBFXlBSPhkDBqBq9MX7u0ic=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ4MRNp0lqMmdnWHkBaN0bYiu3NyVZLTvXbzAb78HL/H#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrqTuBK9SuQu9hS9hBIqRv9weMcR5IS3TOGti2Gz24hxwuCxS2PuVSyWVacVoXmRrXt6Nl3b5KRQ35C6gTvbIU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCux/eS/9tJWdvcz7CSqzbT3/CFFfMIoClo+OiLmW4DHDCsL7b4Sd8s4ZGetrM/b9d+nZhH3I0np2S0wkbf0kzxDpFnzV/CqSLPcHC1GFG8DlXIWkbbK3H9Nc+il8eG2rceqOXs5LCS6H6lOeSAynOJd7kkW0euL4YtQcqH6/PCpvaHnyAXOL9+76w6apGzrWBRGSKGvwJiCrundYhP4TjMSlb6ITyIdF0bE1617p7zZOh+CQt6wB17bBAKL/ZR7qQsjbIhW1zwJ7R0NuWJrgxemGImJ3YRN+2WJ5UpNJxoMPkwC67IfW4avOTykueyK9cACQ/OLPMvhxBVzsBBfmV7Xl5RquVXDj1OrXfG+zVu5YV0+GEtmxZhptXdzBvMkDBAr3hRB/jE/GZeCx/d6eoA3vfyT7tFrBaunMaiIutt/GbmQBhPSqSrqgau7M8rqs7ocyOCZI3ezwskVMxOX8yCOVAib7rHUkj+I+B48V/7MXiHOkBpOBUmgGSiM2whUe8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiG2htD5mCqa+IIAJsjOKgNJpPNmrlfh2g7QGI6KcQd#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG8QHiFr+d3LEQcNktaGAAZTvvRlNt/N3ZuLInnbRWqbA8w9jqUbMmg6m0Yc2Z+a+4iHrAMgRl5PGiHtvzbSe78=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVmjyOgMrBcNkKRe/3MkTqg/LhVt3sOvBD2IwLvjJmLe3cxmmFlu3iixT4LIzRscHQxUt6EqOuAiYL2BapPTTPjEaB+TseppBVXIPZfjllMgVy8pSqsZa+MUsbI4pONfcoart2REu5ObJIPOSl3YDAkGB+rxeAE1BD+sYmdlKriC/2JkUcS6p03QSjQnukMP476+uzXmPHLvm7A9TJjN2Oa4FkgJFI8+gFZaKPpHzCdoYD8COI0LYpp49uJ0gHQ7E4AepcpNUZXBgEsYKntsF9J/md1b13dW0ucGniV3eVxfWAH3xMRlwfFrT8TB+iQ74ghNmDEY/CCpZwkpL4W6bV7GT4+3nbvWIJv9/dgPSqeunTbbAWPEu6KM0nOuOGVRtQ6+q4aM3TRwV0DUvZptSGhRnHOekdOBRtiuMOnClub09PJMyOr4fKi3e59CfIx36NjxbNZfwA1j9jS3BDHL5BtATwiuTVMUWtdRYUT0h4zdmDtHkVnnPQBm2C3d7o/8c=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHThs9i/0cwyfrem5xVfEov0dwlVT7YQsUAzvhlKxVcU#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPv7c3x32Z77V8zjbPteGtuwIl3HzfI8HP5le/fNUtef+zMbIe6oyaIlzMLTKYnfaTTkKeVwM+hyTawD64NkAc=#012 create=True mode=0644 path=/tmp/ansible.4x3db8g_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:45:05 np0005603541 python3.9[69026]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.4x3db8g_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:45:06 np0005603541 python3.9[69180]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.4x3db8g_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
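The three tasks ending here follow a write-to-temp-then-replace pattern: `blockinfile` edits a tempfile, `cat tmpfile > /etc/ssh/ssh_known_hosts` replaces the target wholesale, and the tempfile is removed. A self-contained sketch with hypothetical host-key content and temp paths standing in for the real files:

```shell
set -e
tmp=$(mktemp /tmp/ansible.XXXXXX)
target=$(mktemp)   # stands in for /etc/ssh/ssh_known_hosts

# Managed block, delimited the way ansible.builtin.blockinfile does it.
{
    echo '# BEGIN ANSIBLE MANAGED BLOCK'
    echo 'compute-0.example.com,192.0.2.10 ssh-ed25519 AAAA...placeholder'
    echo '# END ANSIBLE MANAGED BLOCK'
} >> "$tmp"

# Replace the target in one shot, then clean up the tempfile.
cat "$tmp" > "$target"
grep -c 'ANSIBLE MANAGED BLOCK' "$target"   # prints 2: both markers landed
rm -f "$tmp" "$target"
```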
Jan 31 01:45:06 np0005603541 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 01:45:06 np0005603541 systemd[1]: session-15.scope: Consumed 2.893s CPU time.
Jan 31 01:45:06 np0005603541 systemd-logind[817]: Session 15 logged out. Waiting for processes to exit.
Jan 31 01:45:06 np0005603541 systemd-logind[817]: Removed session 15.
Jan 31 01:45:12 np0005603541 systemd-logind[817]: New session 16 of user zuul.
Jan 31 01:45:12 np0005603541 systemd[1]: Started Session 16 of User zuul.
Jan 31 01:45:13 np0005603541 python3.9[69358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:45:14 np0005603541 python3.9[69514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 01:45:15 np0005603541 python3.9[69668]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:45:16 np0005603541 python3.9[69821]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:45:16 np0005603541 python3.9[69974]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:45:17 np0005603541 python3.9[70128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:45:18 np0005603541 python3.9[70283]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
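The log shows a change-marker handshake: when the ruleset was regenerated, edpm-rules.nft.changed was touched (01:44:46); this later play stats the marker (01:45:16), applies the flush/rules/jump files only because it exists, and then deletes it so the next run is a no-op. A sketch of that pattern with plain files and a hypothetical `apply_rules` step:

```shell
set -e
state=$(mktemp -d)
marker="$state/edpm-rules.nft.changed"

apply_rules() { echo "rules applied" >> "$state/log"; }   # hypothetical

# Writer side: after regenerating the ruleset, flag that it changed.
touch "$marker"

# Applier side: act only when the marker exists, then clear it.
if [ -e "$marker" ]; then
    apply_rules
    rm -f "$marker"
fi

# A second applier pass sees no marker and does nothing.
[ -e "$marker" ] || echo "no pending changes"
rm -rf "$state"
```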
Jan 31 01:45:19 np0005603541 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 01:45:19 np0005603541 systemd[1]: session-16.scope: Consumed 3.658s CPU time.
Jan 31 01:45:19 np0005603541 systemd-logind[817]: Session 16 logged out. Waiting for processes to exit.
Jan 31 01:45:19 np0005603541 systemd-logind[817]: Removed session 16.
Jan 31 01:45:25 np0005603541 systemd-logind[817]: New session 17 of user zuul.
Jan 31 01:45:25 np0005603541 systemd[1]: Started Session 17 of User zuul.
Jan 31 01:45:26 np0005603541 python3.9[70461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:45:28 np0005603541 python3.9[70617]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:45:28 np0005603541 python3.9[70701]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 01:45:31 np0005603541 python3.9[70852]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
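yum-utils is installed just before this so that `needs-restarting -r` is available; with `-r` it reports only whether a full reboot is required, exiting 0 when it is not and 1 when core packages (kernel, glibc, systemd, ...) have been updated since boot. A guarded sketch of how a play can branch on that exit code:

```shell
# Branch on `needs-restarting -r` (yum-utils/dnf-utils); skip cleanly
# on hosts where the tool is not installed.
if command -v needs-restarting >/dev/null 2>&1; then
    if needs-restarting -r; then
        echo "no reboot required"
    else
        echo "reboot required"
    fi
else
    echo "needs-restarting not installed; skipping"
fi
```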
Jan 31 01:45:33 np0005603541 python3.9[71003]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 01:45:34 np0005603541 python3.9[71153]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:45:34 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:45:34 np0005603541 python3.9[71304]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:45:35 np0005603541 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 01:45:35 np0005603541 systemd[1]: session-17.scope: Consumed 5.095s CPU time.
Jan 31 01:45:35 np0005603541 systemd-logind[817]: Session 17 logged out. Waiting for processes to exit.
Jan 31 01:45:35 np0005603541 systemd-logind[817]: Removed session 17.
Jan 31 01:45:43 np0005603541 systemd-logind[817]: New session 18 of user zuul.
Jan 31 01:45:43 np0005603541 systemd[1]: Started Session 18 of User zuul.
Jan 31 01:45:50 np0005603541 python3[72070]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:45:52 np0005603541 python3[72165]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 01:45:53 np0005603541 python3[72192]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:45:54 np0005603541 python3[72218]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:45:54 np0005603541 kernel: loop: module loaded
Jan 31 01:45:54 np0005603541 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 01:45:54 np0005603541 python3[72253]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
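The two command tasks above build a file-backed OSD device: a zero-count `dd` with `seek=7G` creates a 7 GiB sparse file (hence the kernel's "capacity change from 0 to 14680064" 512-byte sectors), which is attached to /dev/loop3 and turned into a PV/VG/LV. Only the sparse-file step runs unprivileged, so the loop/LVM steps are shown as comments here:

```shell
set -e
img=$(mktemp /tmp/ceph-osd-0.img.XXXXXX)

# count=0 writes nothing; seek=7G just extends the file, so the apparent
# size is 7 GiB while actual disk usage stays near zero.
dd if=/dev/zero of="$img" bs=1 count=0 seek=7G 2>/dev/null
stat -c '%s' "$img"    # 7516192768 bytes (7 * 1024^3)

# With root, the log's remaining steps would be:
#   losetup /dev/loop3 /var/lib/ceph-osd-0.img
#   pvcreate /dev/loop3
#   vgcreate ceph_vg0 /dev/loop3
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
rm -f "$img"
```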
Jan 31 01:45:54 np0005603541 lvm[72257]: PV /dev/loop3 not used.
Jan 31 01:45:54 np0005603541 lvm[72259]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 01:45:54 np0005603541 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 01:45:54 np0005603541 lvm[72265]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 01:45:54 np0005603541 lvm[72269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 01:45:54 np0005603541 lvm[72269]: VG ceph_vg0 finished
Jan 31 01:45:54 np0005603541 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 01:45:55 np0005603541 python3[72347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:45:55 np0005603541 python3[72420]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841955.1094294-37045-150235486343586/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:45:56 np0005603541 python3[72470]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:45:56 np0005603541 systemd[1]: Reloading.
Jan 31 01:45:56 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:45:56 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:45:56 np0005603541 systemd[1]: Starting Ceph OSD losetup...
Jan 31 01:45:56 np0005603541 bash[72511]: /dev/loop3: [64513]:4355663 (/var/lib/ceph-osd-0.img)
Jan 31 01:45:56 np0005603541 systemd[1]: Finished Ceph OSD losetup.
Jan 31 01:45:56 np0005603541 lvm[72512]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 01:45:56 np0005603541 lvm[72512]: VG ceph_vg0 finished
Jan 31 01:45:57 np0005603541 chronyd[58544]: Selected source 147.189.136.126 (pool.ntp.org)
Jan 31 01:45:58 np0005603541 python3[72536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:46:01 np0005603541 python3[72630]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 01:46:03 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:46:03 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:46:03 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:46:03 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:46:03 np0005603541 systemd[1]: run-r979420c2f3f14bff9a0f3c6ab3d7423d.service: Deactivated successfully.
Jan 31 01:46:03 np0005603541 python3[72741]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:04 np0005603541 python3[72769]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:04 np0005603541 python3[72830]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:05 np0005603541 python3[72856]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:05 np0005603541 python3[72934]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:46:06 np0005603541 python3[73007]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841965.5092435-37236-175513372347087/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:06 np0005603541 python3[73109]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:46:07 np0005603541 python3[73182]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841966.540467-37254-191415038279262/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:07 np0005603541 python3[73232]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:07 np0005603541 python3[73260]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:08 np0005603541 python3[73288]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:08 np0005603541 python3[73314]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:08 np0005603541 python3[73340]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:08 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:09 np0005603541 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 01:46:09 np0005603541 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 01:46:09 np0005603541 systemd-logind[817]: New session 19 of user ceph-admin.
Jan 31 01:46:09 np0005603541 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 01:46:09 np0005603541 systemd[1]: Starting User Manager for UID 42477...
Jan 31 01:46:09 np0005603541 systemd[73360]: Queued start job for default target Main User Target.
Jan 31 01:46:09 np0005603541 systemd[73360]: Created slice User Application Slice.
Jan 31 01:46:09 np0005603541 systemd[73360]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 01:46:09 np0005603541 systemd[73360]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 01:46:09 np0005603541 systemd[73360]: Reached target Paths.
Jan 31 01:46:09 np0005603541 systemd[73360]: Reached target Timers.
Jan 31 01:46:09 np0005603541 systemd[73360]: Starting D-Bus User Message Bus Socket...
Jan 31 01:46:09 np0005603541 systemd[73360]: Starting Create User's Volatile Files and Directories...
Jan 31 01:46:09 np0005603541 systemd[73360]: Finished Create User's Volatile Files and Directories.
Jan 31 01:46:09 np0005603541 systemd[73360]: Listening on D-Bus User Message Bus Socket.
Jan 31 01:46:09 np0005603541 systemd[73360]: Reached target Sockets.
Jan 31 01:46:09 np0005603541 systemd[73360]: Reached target Basic System.
Jan 31 01:46:09 np0005603541 systemd[73360]: Reached target Main User Target.
Jan 31 01:46:09 np0005603541 systemd[73360]: Startup finished in 123ms.
Jan 31 01:46:09 np0005603541 systemd[1]: Started User Manager for UID 42477.
Jan 31 01:46:09 np0005603541 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 01:46:09 np0005603541 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 01:46:09 np0005603541 systemd-logind[817]: Session 19 logged out. Waiting for processes to exit.
Jan 31 01:46:09 np0005603541 systemd-logind[817]: Removed session 19.
Jan 31 01:46:11 np0005603541 systemd[1]: var-lib-containers-storage-overlay-compat3993874435-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 01:46:19 np0005603541 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 01:46:19 np0005603541 systemd[73360]: Activating special unit Exit the Session...
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped target Main User Target.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped target Basic System.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped target Paths.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped target Sockets.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped target Timers.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 01:46:19 np0005603541 systemd[73360]: Closed D-Bus User Message Bus Socket.
Jan 31 01:46:19 np0005603541 systemd[73360]: Stopped Create User's Volatile Files and Directories.
Jan 31 01:46:19 np0005603541 systemd[73360]: Removed slice User Application Slice.
Jan 31 01:46:19 np0005603541 systemd[73360]: Reached target Shutdown.
Jan 31 01:46:19 np0005603541 systemd[73360]: Finished Exit the Session.
Jan 31 01:46:19 np0005603541 systemd[73360]: Reached target Exit the Session.
Jan 31 01:46:19 np0005603541 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 01:46:19 np0005603541 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 01:46:19 np0005603541 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 01:46:19 np0005603541 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 01:46:19 np0005603541 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 01:46:19 np0005603541 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 01:46:19 np0005603541 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 01:46:48 np0005603541 podman[73414]: 2026-01-31 06:46:48.68338251 +0000 UTC m=+39.256565120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:48 np0005603541 podman[73478]: 2026-01-31 06:46:48.739229828 +0000 UTC m=+0.030328953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:48 np0005603541 podman[73478]: 2026-01-31 06:46:48.967884559 +0000 UTC m=+0.258983634 container create 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 01:46:49 np0005603541 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 01:46:49 np0005603541 systemd[1]: Started libpod-conmon-91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f.scope.
Jan 31 01:46:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:49 np0005603541 podman[73478]: 2026-01-31 06:46:49.468973243 +0000 UTC m=+0.760072318 container init 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 01:46:49 np0005603541 podman[73478]: 2026-01-31 06:46:49.474980681 +0000 UTC m=+0.766079756 container start 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:46:49 np0005603541 podman[73478]: 2026-01-31 06:46:49.515931904 +0000 UTC m=+0.807031009 container attach 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:46:49 np0005603541 keen_keller[73494]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 01:46:49 np0005603541 systemd[1]: libpod-91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f.scope: Deactivated successfully.
Jan 31 01:46:49 np0005603541 podman[73478]: 2026-01-31 06:46:49.78890477 +0000 UTC m=+1.080003855 container died 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 01:46:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f6610aeff855ca608365c5764efeaf78cf883116ffba7f3d10692e7643ed4321-merged.mount: Deactivated successfully.
Jan 31 01:46:50 np0005603541 podman[73478]: 2026-01-31 06:46:50.971327733 +0000 UTC m=+2.262426808 container remove 91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f (image=quay.io/ceph/ceph:v18, name=keen_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:46:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.019988735 +0000 UTC m=+0.027438152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.157675528 +0000 UTC m=+0.165124895 container create 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:51 np0005603541 systemd[1]: Started libpod-conmon-297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028.scope.
Jan 31 01:46:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.276216461 +0000 UTC m=+0.283665928 container init 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.282775352 +0000 UTC m=+0.290224749 container start 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 01:46:51 np0005603541 tender_hoover[73528]: 167 167
Jan 31 01:46:51 np0005603541 systemd[1]: libpod-297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028.scope: Deactivated successfully.
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.299131802 +0000 UTC m=+0.306581169 container attach 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.299679246 +0000 UTC m=+0.307128633 container died 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:46:51 np0005603541 systemd[1]: libpod-conmon-91156e7005298045bf4f50ba2310adafbd62865879c3af7cf09653bfea7d365f.scope: Deactivated successfully.
Jan 31 01:46:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f5310018c1621a3b6ffd418207d6530b642130066d7438901fa7362ece71a871-merged.mount: Deactivated successfully.
Jan 31 01:46:51 np0005603541 podman[73511]: 2026-01-31 06:46:51.592727664 +0000 UTC m=+0.600177021 container remove 297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028 (image=quay.io/ceph/ceph:v18, name=tender_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:46:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:51 np0005603541 systemd[1]: libpod-conmon-297c535524e774426553b7b00144bced738623c18dba083a799e6ba16934d028.scope: Deactivated successfully.
Jan 31 01:46:51 np0005603541 podman[73546]: 2026-01-31 06:46:51.716505597 +0000 UTC m=+0.108879668 container create 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:51 np0005603541 podman[73546]: 2026-01-31 06:46:51.640070654 +0000 UTC m=+0.032444745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:51 np0005603541 systemd[1]: Started libpod-conmon-393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08.scope.
Jan 31 01:46:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:51 np0005603541 podman[73546]: 2026-01-31 06:46:51.863064606 +0000 UTC m=+0.255438677 container init 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 01:46:51 np0005603541 podman[73546]: 2026-01-31 06:46:51.867003353 +0000 UTC m=+0.259377414 container start 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:51 np0005603541 heuristic_goldstine[73562]: AQBbpX1pkgubNBAAY42Nr163ajIms8oRzBSoSA==
Jan 31 01:46:51 np0005603541 systemd[1]: libpod-393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08.scope: Deactivated successfully.
Jan 31 01:46:52 np0005603541 podman[73546]: 2026-01-31 06:46:52.03341056 +0000 UTC m=+0.425784641 container attach 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:46:52 np0005603541 podman[73546]: 2026-01-31 06:46:52.033794659 +0000 UTC m=+0.426168720 container died 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:46:52 np0005603541 podman[73546]: 2026-01-31 06:46:52.409975174 +0000 UTC m=+0.802349235 container remove 393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08 (image=quay.io/ceph/ceph:v18, name=heuristic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:46:52 np0005603541 systemd[1]: libpod-conmon-393d1623f9499a2a34213a872b132d356097cc7aacb35f7affd52628e0960b08.scope: Deactivated successfully.
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.508853035 +0000 UTC m=+0.082462810 container create d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.446016267 +0000 UTC m=+0.019626062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:52 np0005603541 systemd[1]: Started libpod-conmon-d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db.scope.
Jan 31 01:46:52 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6648f9c7411b0b92cde74ee5945fd56dfb67d38d525e3bc75fd0215f0f0c423b-merged.mount: Deactivated successfully.
Jan 31 01:46:52 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.619322821 +0000 UTC m=+0.192932616 container init d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.625223506 +0000 UTC m=+0.198833281 container start d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 01:46:52 np0005603541 sweet_taussig[73597]: AQBcpX1pcQ9hJhAAZVbMFKnfKN3Kc5pIz4jpcQ==
Jan 31 01:46:52 np0005603541 systemd[1]: libpod-d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db.scope: Deactivated successfully.
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.65191988 +0000 UTC m=+0.225529675 container attach d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:46:52 np0005603541 podman[73581]: 2026-01-31 06:46:52.652362811 +0000 UTC m=+0.225972586 container died d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:46:52 np0005603541 systemd[1]: var-lib-containers-storage-overlay-37f8748fab1f7e48ce5ef47b2743b29b9b7c82eaede33c09a8089d41c24065f5-merged.mount: Deactivated successfully.
Jan 31 01:46:53 np0005603541 podman[73581]: 2026-01-31 06:46:53.044509616 +0000 UTC m=+0.618119401 container remove d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db (image=quay.io/ceph/ceph:v18, name=sweet_taussig, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.079976175 +0000 UTC m=+0.020580805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.178230062 +0000 UTC m=+0.118834672 container create b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:53 np0005603541 systemd[1]: Started libpod-conmon-b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2.scope.
Jan 31 01:46:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.273492876 +0000 UTC m=+0.214097476 container init b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.279971814 +0000 UTC m=+0.220576414 container start b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:46:53 np0005603541 nifty_villani[73633]: AQBdpX1pre60ERAAoBqkM/QMM+unAXnuCXMv6w==
Jan 31 01:46:53 np0005603541 systemd[1]: libpod-b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2.scope: Deactivated successfully.
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.366690008 +0000 UTC m=+0.307294608 container attach b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.368372109 +0000 UTC m=+0.308976709 container died b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:46:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6721b062ac6a8ba44fb7039b7775dbc72bef75f04210bb5be3d2f8d18a05f876-merged.mount: Deactivated successfully.
Jan 31 01:46:53 np0005603541 podman[73617]: 2026-01-31 06:46:53.947175408 +0000 UTC m=+0.887780048 container remove b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2 (image=quay.io/ceph/ceph:v18, name=nifty_villani, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:46:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-conmon-b8eef96dc04ace6d0ee6ed772841de583a344dff6f750eea5951912e6128c2f2.scope: Deactivated successfully.
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-conmon-d98e7c3c525a1afdfac4840237d4ac4c5f343128db21e2cdc386e32d440c42db.scope: Deactivated successfully.
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.080068223 +0000 UTC m=+0.112612450 container create 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:53.99911268 +0000 UTC m=+0.031656927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:54 np0005603541 systemd[1]: Started libpod-conmon-006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd.scope.
Jan 31 01:46:54 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64300a0b4bf63c2781489a2974f76e96f77a7ebcb2f0c70d46f20c490be7cc4f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.204206393 +0000 UTC m=+0.236750710 container init 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.208453207 +0000 UTC m=+0.240997474 container start 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:54 np0005603541 eager_shannon[73669]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 01:46:54 np0005603541 eager_shannon[73669]: setting min_mon_release = pacific
Jan 31 01:46:54 np0005603541 eager_shannon[73669]: /usr/bin/monmaptool: set fsid to ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:54 np0005603541 eager_shannon[73669]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd.scope: Deactivated successfully.
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.356304229 +0000 UTC m=+0.388848496 container attach 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.357230382 +0000 UTC m=+0.389774649 container died 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:46:54 np0005603541 podman[73653]: 2026-01-31 06:46:54.69192047 +0000 UTC m=+0.724464697 container remove 006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd (image=quay.io/ceph/ceph:v18, name=eager_shannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:46:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-conmon-006a42b0f6317d223587d612c82ea642d01100f007dbe9834df07155e488e2dd.scope: Deactivated successfully.
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.768305541 +0000 UTC m=+0.053582104 container create d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:46:54 np0005603541 systemd[1]: Started libpod-conmon-d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896.scope.
Jan 31 01:46:54 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4f9ff6d011ce295e3bd405cb6ff3a3de240d8a4a42b680bfafbca550e204d/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4f9ff6d011ce295e3bd405cb6ff3a3de240d8a4a42b680bfafbca550e204d/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4f9ff6d011ce295e3bd405cb6ff3a3de240d8a4a42b680bfafbca550e204d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4f9ff6d011ce295e3bd405cb6ff3a3de240d8a4a42b680bfafbca550e204d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.823638046 +0000 UTC m=+0.108914619 container init d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.829216073 +0000 UTC m=+0.114492646 container start d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.733912329 +0000 UTC m=+0.019188922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.83358223 +0000 UTC m=+0.118858803 container attach d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896.scope: Deactivated successfully.
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.907803308 +0000 UTC m=+0.193079881 container died d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 01:46:54 np0005603541 podman[73688]: 2026-01-31 06:46:54.945126793 +0000 UTC m=+0.230403376 container remove d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896 (image=quay.io/ceph/ceph:v18, name=boring_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 01:46:54 np0005603541 systemd[1]: libpod-conmon-d2810e2d72d9bbfb2631b073f670b96ef0f2c3aa895bef9b212257b1728c9896.scope: Deactivated successfully.
Jan 31 01:46:55 np0005603541 systemd[1]: Reloading.
Jan 31 01:46:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:46:55 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:55 np0005603541 systemd[1]: Reloading.
Jan 31 01:46:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:46:55 np0005603541 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 01:46:55 np0005603541 systemd[1]: Reloading.
Jan 31 01:46:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:46:55 np0005603541 systemd[1]: Reached target Ceph cluster ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:46:55 np0005603541 systemd[1]: Reloading.
Jan 31 01:46:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:46:55 np0005603541 systemd[1]: Reloading.
Jan 31 01:46:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:46:55 np0005603541 systemd[1]: Created slice Slice /system/ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:46:55 np0005603541 systemd[1]: Reached target System Time Set.
Jan 31 01:46:55 np0005603541 systemd[1]: Reached target System Time Synchronized.
Jan 31 01:46:55 np0005603541 systemd[1]: Starting Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:46:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:56 np0005603541 podman[73980]: 2026-01-31 06:46:56.126392667 +0000 UTC m=+0.034000814 container create 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67fd9c89c193fc34a22fa765e15f194931439920c1cb261574f321fc6e029f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67fd9c89c193fc34a22fa765e15f194931439920c1cb261574f321fc6e029f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67fd9c89c193fc34a22fa765e15f194931439920c1cb261574f321fc6e029f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67fd9c89c193fc34a22fa765e15f194931439920c1cb261574f321fc6e029f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 podman[73980]: 2026-01-31 06:46:56.179372585 +0000 UTC m=+0.086980742 container init 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:46:56 np0005603541 podman[73980]: 2026-01-31 06:46:56.183695681 +0000 UTC m=+0.091303818 container start 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 01:46:56 np0005603541 bash[73980]: 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae
Jan 31 01:46:56 np0005603541 podman[73980]: 2026-01-31 06:46:56.110650212 +0000 UTC m=+0.018258399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:56 np0005603541 systemd[1]: Started Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: pidfile_write: ignore empty --pid-file
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: load: jerasure load: lrc 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: RocksDB version: 7.9.2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Git sha 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: DB SUMMARY
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: DB Session ID:  JV7LO54C1SJKLBRWNI6G
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: CURRENT file:  CURRENT
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                         Options.error_if_exists: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.create_if_missing: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                                     Options.env: 0x5621b1ad8c40
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                                Options.info_log: 0x5621b42f8ec0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                              Options.statistics: (nil)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                               Options.use_fsync: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                              Options.db_log_dir: 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                                 Options.wal_dir: 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                    Options.write_buffer_manager: 0x5621b4308b40
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.unordered_write: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                               Options.row_cache: None
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                              Options.wal_filter: None
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.two_write_queues: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.wal_compression: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.atomic_flush: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.max_background_jobs: 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.max_background_compactions: -1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.max_subcompactions: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                          Options.max_open_files: -1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Compression algorithms supported:
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kZSTD supported: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kXpressCompression supported: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kZlibCompression supported: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:           Options.merge_operator: 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:        Options.compaction_filter: None
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621b42f8aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621b42f11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.compression: NoCompression
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.num_levels: 7
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22587319-adf7-48dc-8223-5e2f596ebaec
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842016227436, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842016229286, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "JV7LO54C1SJKLBRWNI6G", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842016229384, "job": 1, "event": "recovery_finished"}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5621b431ae00
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: DB pointer 0x5621b43a4000
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621b42f11f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@-1(???) e0 preinit fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.259699032 +0000 UTC m=+0.044770698 container create 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T06:46:54.858407Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).mds e1 new map
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mkfs ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:56 np0005603541 systemd[1]: Started libpod-conmon-4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8.scope.
Jan 31 01:46:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.237920889 +0000 UTC m=+0.022992565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca21956d61bab22781f7d5e683415909ead78d3226548e3c2975d4208abb9eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca21956d61bab22781f7d5e683415909ead78d3226548e3c2975d4208abb9eaf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca21956d61bab22781f7d5e683415909ead78d3226548e3c2975d4208abb9eaf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.362724086 +0000 UTC m=+0.147795752 container init 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.372929296 +0000 UTC m=+0.158000942 container start 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.3771717 +0000 UTC m=+0.162243366 container attach 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 01:46:56 np0005603541 ceph-mon[73999]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622930288' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:  cluster:
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    id:     ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    health: HEALTH_OK
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]: 
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:  services:
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    mon: 1 daemons, quorum compute-0 (age 0.522218s)
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    mgr: no daemons active
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    osd: 0 osds: 0 up, 0 in
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]: 
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:  data:
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    pools:   0 pools, 0 pgs
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    objects: 0 objects, 0 B
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    usage:   0 B used, 0 B / 0 B avail
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]:    pgs:     
Jan 31 01:46:56 np0005603541 agitated_rhodes[74055]: 
Jan 31 01:46:56 np0005603541 systemd[1]: libpod-4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8.scope: Deactivated successfully.
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.79972684 +0000 UTC m=+0.584798486 container died 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 01:46:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ca21956d61bab22781f7d5e683415909ead78d3226548e3c2975d4208abb9eaf-merged.mount: Deactivated successfully.
Jan 31 01:46:56 np0005603541 podman[74000]: 2026-01-31 06:46:56.833948369 +0000 UTC m=+0.619020015 container remove 4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8 (image=quay.io/ceph/ceph:v18, name=agitated_rhodes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 01:46:56 np0005603541 systemd[1]: libpod-conmon-4822582cb156222b5bb50d636e0a2410fb59a377411d787204da1bc4f48311d8.scope: Deactivated successfully.
Jan 31 01:46:56 np0005603541 podman[74092]: 2026-01-31 06:46:56.879568426 +0000 UTC m=+0.031656927 container create b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 01:46:56 np0005603541 systemd[1]: Started libpod-conmon-b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d.scope.
Jan 31 01:46:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae5234d68b185a89c8a622627d6dc8a30fd5740522fad3566f406737fc0f47c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae5234d68b185a89c8a622627d6dc8a30fd5740522fad3566f406737fc0f47c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae5234d68b185a89c8a622627d6dc8a30fd5740522fad3566f406737fc0f47c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae5234d68b185a89c8a622627d6dc8a30fd5740522fad3566f406737fc0f47c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:56 np0005603541 podman[74092]: 2026-01-31 06:46:56.946802383 +0000 UTC m=+0.098890914 container init b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:46:56 np0005603541 podman[74092]: 2026-01-31 06:46:56.954522052 +0000 UTC m=+0.106610543 container start b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:46:56 np0005603541 podman[74092]: 2026-01-31 06:46:56.957960766 +0000 UTC m=+0.110049297 container attach b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 01:46:56 np0005603541 podman[74092]: 2026-01-31 06:46:56.865274716 +0000 UTC m=+0.017363237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/498802827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/498802827' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 01:46:57 np0005603541 beautiful_blackburn[74108]: 
Jan 31 01:46:57 np0005603541 beautiful_blackburn[74108]: [global]
Jan 31 01:46:57 np0005603541 beautiful_blackburn[74108]: #011fsid = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:57 np0005603541 beautiful_blackburn[74108]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: from='client.? 192.168.122.100:0/498802827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:46:57 np0005603541 systemd[1]: libpod-b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d.scope: Deactivated successfully.
Jan 31 01:46:57 np0005603541 podman[74092]: 2026-01-31 06:46:57.323371057 +0000 UTC m=+0.475459548 container died b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:46:57 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6ae5234d68b185a89c8a622627d6dc8a30fd5740522fad3566f406737fc0f47c-merged.mount: Deactivated successfully.
Jan 31 01:46:57 np0005603541 podman[74092]: 2026-01-31 06:46:57.365167601 +0000 UTC m=+0.517256102 container remove b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:57 np0005603541 systemd[1]: libpod-conmon-b702e614bc444356dd152365c52aceb799a900a91a50f38af832551aeac1341d.scope: Deactivated successfully.
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.413281619 +0000 UTC m=+0.033919692 container create 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:57 np0005603541 systemd[1]: Started libpod-conmon-7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3.scope.
Jan 31 01:46:57 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffd1e272754b33453669d20c1fa3673f800fad8a0112defccbf476443606199/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffd1e272754b33453669d20c1fa3673f800fad8a0112defccbf476443606199/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffd1e272754b33453669d20c1fa3673f800fad8a0112defccbf476443606199/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffd1e272754b33453669d20c1fa3673f800fad8a0112defccbf476443606199/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.491090675 +0000 UTC m=+0.111728758 container init 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.397638976 +0000 UTC m=+0.018277099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.498284161 +0000 UTC m=+0.118922234 container start 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.502858933 +0000 UTC m=+0.123497006 container attach 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:46:57 np0005603541 ceph-mon[73999]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947108142' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:46:57 np0005603541 systemd[1]: libpod-7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3.scope: Deactivated successfully.
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.915791148 +0000 UTC m=+0.536429241 container died 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:46:57 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5ffd1e272754b33453669d20c1fa3673f800fad8a0112defccbf476443606199-merged.mount: Deactivated successfully.
Jan 31 01:46:57 np0005603541 podman[74145]: 2026-01-31 06:46:57.967973386 +0000 UTC m=+0.588611459 container remove 7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3 (image=quay.io/ceph/ceph:v18, name=keen_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:46:57 np0005603541 systemd[1]: libpod-conmon-7575a0a60904d2877cfb2fbc8e10b56d1a310c66e9f2defff41c30e909008dd3.scope: Deactivated successfully.
Jan 31 01:46:57 np0005603541 systemd[1]: Stopping Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:46:58 np0005603541 ceph-mon[73999]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 01:46:58 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 01:46:58 np0005603541 ceph-mon[73999]: mon.compute-0@0(leader) e1 shutdown
Jan 31 01:46:58 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0[73995]: 2026-01-31T06:46:58.131+0000 7f56bd839640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 01:46:58 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0[73995]: 2026-01-31T06:46:58.131+0000 7f56bd839640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 01:46:58 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 01:46:58 np0005603541 ceph-mon[73999]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 01:46:58 np0005603541 podman[74232]: 2026-01-31 06:46:58.295030418 +0000 UTC m=+0.193594313 container died 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:58 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9b67fd9c89c193fc34a22fa765e15f194931439920c1cb261574f321fc6e029f-merged.mount: Deactivated successfully.
Jan 31 01:46:58 np0005603541 podman[74232]: 2026-01-31 06:46:58.328142479 +0000 UTC m=+0.226706404 container remove 9e79f06bf755abf8b48801ffbdd26b571f56b2d1841fb83ae0ab782e69edddae (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 01:46:58 np0005603541 bash[74232]: ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0
Jan 31 01:46:58 np0005603541 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 01:46:58 np0005603541 systemd[1]: ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@mon.compute-0.service: Deactivated successfully.
Jan 31 01:46:58 np0005603541 systemd[1]: Stopped Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:46:58 np0005603541 systemd[1]: Starting Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:46:58 np0005603541 podman[74335]: 2026-01-31 06:46:58.591685145 +0000 UTC m=+0.033927832 container create ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25518c6609b1da49f3c248f623828003e8e3b0b616217df252491540fb0a1a2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25518c6609b1da49f3c248f623828003e8e3b0b616217df252491540fb0a1a2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25518c6609b1da49f3c248f623828003e8e3b0b616217df252491540fb0a1a2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25518c6609b1da49f3c248f623828003e8e3b0b616217df252491540fb0a1a2b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 podman[74335]: 2026-01-31 06:46:58.640433299 +0000 UTC m=+0.082675996 container init ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:46:58 np0005603541 podman[74335]: 2026-01-31 06:46:58.646152699 +0000 UTC m=+0.088395386 container start ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 01:46:58 np0005603541 bash[74335]: ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d
Jan 31 01:46:58 np0005603541 podman[74335]: 2026-01-31 06:46:58.576937013 +0000 UTC m=+0.019179720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:58 np0005603541 systemd[1]: Started Ceph mon.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: pidfile_write: ignore empty --pid-file
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: load: jerasure load: lrc 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: RocksDB version: 7.9.2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Git sha 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: DB SUMMARY
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: DB Session ID:  F9FZJBU69XSJM19R5DYZ
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: CURRENT file:  CURRENT
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52696 ; 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                         Options.error_if_exists: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.create_if_missing: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                                     Options.env: 0x561558b46c40
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                                Options.info_log: 0x56155a007040
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                              Options.statistics: (nil)
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                               Options.use_fsync: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                              Options.db_log_dir: 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                                 Options.wal_dir: 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                    Options.write_buffer_manager: 0x56155a016b40
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.unordered_write: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                               Options.row_cache: None
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                              Options.wal_filter: None
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.two_write_queues: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.wal_compression: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.atomic_flush: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.max_background_jobs: 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.max_background_compactions: -1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.max_subcompactions: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                          Options.max_open_files: -1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Compression algorithms supported:
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kZSTD supported: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kXpressCompression supported: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kZlibCompression supported: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:           Options.merge_operator: 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:        Options.compaction_filter: None
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56155a006c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561559fff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.compression: NoCompression
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.num_levels: 7
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22587319-adf7-48dc-8223-5e2f596ebaec
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842018686298, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842018689014, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 52460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 128, "table_properties": {"data_size": 51013, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2850, "raw_average_key_size": 30, "raw_value_size": 48732, "raw_average_value_size": 512, "num_data_blocks": 7, "num_entries": 95, "num_filter_entries": 95, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842018, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842018689109, "job": 1, "event": "recovery_finished"}
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56155a028e00
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: DB pointer 0x56155a130000
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   53.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0   53.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561559fff1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???) e1 preinit fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).mds e1 new map
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 01:46:58 np0005603541 podman[74356]: 2026-01-31 06:46:58.726608169 +0000 UTC m=+0.038785980 container create 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 31 01:46:58 np0005603541 systemd[1]: Started libpod-conmon-2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c.scope.
Jan 31 01:46:58 np0005603541 ceph-mon[74355]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 01:46:58 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732755b99c2a945bf7fccc754fff20e57fe11fa1f7968a21bca0f0f761a12113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732755b99c2a945bf7fccc754fff20e57fe11fa1f7968a21bca0f0f761a12113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732755b99c2a945bf7fccc754fff20e57fe11fa1f7968a21bca0f0f761a12113/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:58 np0005603541 podman[74356]: 2026-01-31 06:46:58.807368568 +0000 UTC m=+0.119546409 container init 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:46:58 np0005603541 podman[74356]: 2026-01-31 06:46:58.713846767 +0000 UTC m=+0.026024598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:58 np0005603541 podman[74356]: 2026-01-31 06:46:58.814564654 +0000 UTC m=+0.126742465 container start 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:46:58 np0005603541 podman[74356]: 2026-01-31 06:46:58.821974445 +0000 UTC m=+0.134152296 container attach 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 31 01:46:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 31 01:46:59 np0005603541 systemd[1]: libpod-2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c.scope: Deactivated successfully.
Jan 31 01:46:59 np0005603541 podman[74356]: 2026-01-31 06:46:59.217224707 +0000 UTC m=+0.529402518 container died 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:46:59 np0005603541 systemd[1]: var-lib-containers-storage-overlay-732755b99c2a945bf7fccc754fff20e57fe11fa1f7968a21bca0f0f761a12113-merged.mount: Deactivated successfully.
Jan 31 01:46:59 np0005603541 podman[74356]: 2026-01-31 06:46:59.614279392 +0000 UTC m=+0.926457203 container remove 2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c (image=quay.io/ceph/ceph:v18, name=great_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:46:59 np0005603541 podman[74449]: 2026-01-31 06:46:59.662226377 +0000 UTC m=+0.025408484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:46:59 np0005603541 podman[74449]: 2026-01-31 06:46:59.848075919 +0000 UTC m=+0.211258016 container create 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 01:46:59 np0005603541 systemd[1]: Started libpod-conmon-8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b.scope.
Jan 31 01:46:59 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:46:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e89adca19bcb6cdf42c7e958183a8b4c1c4b75ca6de2837e61b8560a8733405/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e89adca19bcb6cdf42c7e958183a8b4c1c4b75ca6de2837e61b8560a8733405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:46:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e89adca19bcb6cdf42c7e958183a8b4c1c4b75ca6de2837e61b8560a8733405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:00 np0005603541 podman[74449]: 2026-01-31 06:47:00.043363333 +0000 UTC m=+0.406545460 container init 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:00 np0005603541 podman[74449]: 2026-01-31 06:47:00.049614636 +0000 UTC m=+0.412796743 container start 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:00 np0005603541 podman[74449]: 2026-01-31 06:47:00.225447323 +0000 UTC m=+0.588629460 container attach 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:00 np0005603541 systemd[1]: libpod-conmon-2c21efc79c6c6416956e8af59ad0b7a25e170fde73c024f67f14950c0dd11b2c.scope: Deactivated successfully.
Jan 31 01:47:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 31 01:47:00 np0005603541 systemd[1]: libpod-8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b.scope: Deactivated successfully.
Jan 31 01:47:00 np0005603541 podman[74449]: 2026-01-31 06:47:00.483099385 +0000 UTC m=+0.846281522 container died 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1e89adca19bcb6cdf42c7e958183a8b4c1c4b75ca6de2837e61b8560a8733405-merged.mount: Deactivated successfully.
Jan 31 01:47:00 np0005603541 podman[74449]: 2026-01-31 06:47:00.77603876 +0000 UTC m=+1.139220887 container remove 8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b (image=quay.io/ceph/ceph:v18, name=nice_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:47:00 np0005603541 systemd[1]: libpod-conmon-8a996fec7f66b7bfbadd605f9a66c9e01b349965eb4ca473130c0e8c9f5f542b.scope: Deactivated successfully.
Jan 31 01:47:01 np0005603541 systemd[1]: Reloading.
Jan 31 01:47:01 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:47:01 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:47:01 np0005603541 systemd[1]: Reloading.
Jan 31 01:47:01 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:47:01 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:47:01 np0005603541 systemd[1]: Starting Ceph mgr.compute-0.gghdjs for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:47:01 np0005603541 podman[74629]: 2026-01-31 06:47:01.675741458 +0000 UTC m=+0.029783531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:01 np0005603541 podman[74629]: 2026-01-31 06:47:01.892144439 +0000 UTC m=+0.246186462 container create d0a9f48927944a35d9d8a7db6f87472ef65987afa50ecc386229e16b527f7697 (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3654a7db7c8c1c38a32b3c18bf5df87ec9ef5ab8e05175c24dbf26b0e7b0dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3654a7db7c8c1c38a32b3c18bf5df87ec9ef5ab8e05175c24dbf26b0e7b0dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3654a7db7c8c1c38a32b3c18bf5df87ec9ef5ab8e05175c24dbf26b0e7b0dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3654a7db7c8c1c38a32b3c18bf5df87ec9ef5ab8e05175c24dbf26b0e7b0dd/merged/var/lib/ceph/mgr/ceph-compute-0.gghdjs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 podman[74629]: 2026-01-31 06:47:02.216051602 +0000 UTC m=+0.570093645 container init d0a9f48927944a35d9d8a7db6f87472ef65987afa50ecc386229e16b527f7697 (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 01:47:02 np0005603541 podman[74629]: 2026-01-31 06:47:02.223965416 +0000 UTC m=+0.578007439 container start d0a9f48927944a35d9d8a7db6f87472ef65987afa50ecc386229e16b527f7697 (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 01:47:02 np0005603541 bash[74629]: d0a9f48927944a35d9d8a7db6f87472ef65987afa50ecc386229e16b527f7697
Jan 31 01:47:02 np0005603541 systemd[1]: Started Ceph mgr.compute-0.gghdjs for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: pidfile_write: ignore empty --pid-file
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'alerts'
Jan 31 01:47:02 np0005603541 podman[74673]: 2026-01-31 06:47:02.343139246 +0000 UTC m=+0.025750952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:02 np0005603541 podman[74673]: 2026-01-31 06:47:02.578743267 +0000 UTC m=+0.261354943 container create 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:47:02 np0005603541 systemd[1]: Started libpod-conmon-0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04.scope.
Jan 31 01:47:02 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4babed86bb3a4774727835ec74198409b399593ef2fac03769e0f4aabc96dfd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4babed86bb3a4774727835ec74198409b399593ef2fac03769e0f4aabc96dfd3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4babed86bb3a4774727835ec74198409b399593ef2fac03769e0f4aabc96dfd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'balancer'
Jan 31 01:47:02 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:02.685+0000 7f09985c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 01:47:02 np0005603541 podman[74673]: 2026-01-31 06:47:02.764886736 +0000 UTC m=+0.447498442 container init 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 01:47:02 np0005603541 podman[74673]: 2026-01-31 06:47:02.770781511 +0000 UTC m=+0.453393207 container start 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:47:02 np0005603541 podman[74673]: 2026-01-31 06:47:02.805438299 +0000 UTC m=+0.488050015 container attach 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 01:47:02 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'cephadm'
Jan 31 01:47:02 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:02.940+0000 7f09985c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 01:47:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627615954' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:03 np0005603541 confident_kirch[74689]: 
Jan 31 01:47:03 np0005603541 confident_kirch[74689]: {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "health": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "status": "HEALTH_OK",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "checks": {},
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "mutes": []
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "election_epoch": 5,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "quorum": [
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        0
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    ],
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "quorum_names": [
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "compute-0"
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    ],
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "quorum_age": 4,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "monmap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "epoch": 1,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "min_mon_release_name": "reef",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_mons": 1
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "osdmap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "epoch": 1,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_osds": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_up_osds": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "osd_up_since": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_in_osds": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "osd_in_since": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_remapped_pgs": 0
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "pgmap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "pgs_by_state": [],
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_pgs": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_pools": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_objects": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "data_bytes": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "bytes_used": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "bytes_avail": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "bytes_total": 0
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "fsmap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "epoch": 1,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "by_rank": [],
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "up:standby": 0
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "mgrmap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "available": false,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "num_standbys": 0,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "modules": [
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:            "iostat",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:            "nfs",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:            "restful"
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        ],
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "services": {}
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "servicemap": {
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "epoch": 1,
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:        "services": {}
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    },
Jan 31 01:47:03 np0005603541 confident_kirch[74689]:    "progress_events": {}
Jan 31 01:47:03 np0005603541 confident_kirch[74689]: }
Jan 31 01:47:03 np0005603541 systemd[1]: libpod-0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04.scope: Deactivated successfully.
Jan 31 01:47:03 np0005603541 podman[74673]: 2026-01-31 06:47:03.141325907 +0000 UTC m=+0.823937593 container died 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:47:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4babed86bb3a4774727835ec74198409b399593ef2fac03769e0f4aabc96dfd3-merged.mount: Deactivated successfully.
Jan 31 01:47:03 np0005603541 podman[74673]: 2026-01-31 06:47:03.461195214 +0000 UTC m=+1.143806900 container remove 0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04 (image=quay.io/ceph/ceph:v18, name=confident_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:03 np0005603541 systemd[1]: libpod-conmon-0c58d28e68eeef7bc59ee07c017ff9776a727bbdf03c01c015f2352cfb8c8b04.scope: Deactivated successfully.
Jan 31 01:47:05 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'crash'
Jan 31 01:47:05 np0005603541 ceph-mgr[74648]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 01:47:05 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'dashboard'
Jan 31 01:47:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:05.326+0000 7f09985c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 01:47:05 np0005603541 podman[74737]: 2026-01-31 06:47:05.564350524 +0000 UTC m=+0.082063766 container create f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:47:05 np0005603541 podman[74737]: 2026-01-31 06:47:05.50387021 +0000 UTC m=+0.021583472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:05 np0005603541 systemd[1]: Started libpod-conmon-f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37.scope.
Jan 31 01:47:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6f9249c158d6aaf43b57d1475c35ddbf2fbb26d6abc3e6f40764450b50225/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6f9249c158d6aaf43b57d1475c35ddbf2fbb26d6abc3e6f40764450b50225/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6f9249c158d6aaf43b57d1475c35ddbf2fbb26d6abc3e6f40764450b50225/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:05 np0005603541 podman[74737]: 2026-01-31 06:47:05.733627758 +0000 UTC m=+0.251341050 container init f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 01:47:05 np0005603541 podman[74737]: 2026-01-31 06:47:05.738574789 +0000 UTC m=+0.256288051 container start f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:47:05 np0005603541 podman[74737]: 2026-01-31 06:47:05.8038019 +0000 UTC m=+0.321515162 container attach f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:47:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096070968' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:06 np0005603541 cranky_borg[74753]: 
Jan 31 01:47:06 np0005603541 cranky_borg[74753]: {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "health": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "status": "HEALTH_OK",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "checks": {},
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "mutes": []
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "election_epoch": 5,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "quorum": [
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        0
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    ],
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "quorum_names": [
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "compute-0"
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    ],
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "quorum_age": 7,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "monmap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "epoch": 1,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "min_mon_release_name": "reef",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_mons": 1
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "osdmap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "epoch": 1,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_osds": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_up_osds": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "osd_up_since": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_in_osds": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "osd_in_since": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_remapped_pgs": 0
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "pgmap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "pgs_by_state": [],
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_pgs": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_pools": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_objects": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "data_bytes": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "bytes_used": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "bytes_avail": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "bytes_total": 0
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "fsmap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "epoch": 1,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "by_rank": [],
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "up:standby": 0
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "mgrmap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "available": false,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "num_standbys": 0,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "modules": [
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:            "iostat",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:            "nfs",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:            "restful"
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        ],
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "services": {}
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "servicemap": {
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "epoch": 1,
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:        "services": {}
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    },
Jan 31 01:47:06 np0005603541 cranky_borg[74753]:    "progress_events": {}
Jan 31 01:47:06 np0005603541 cranky_borg[74753]: }
Jan 31 01:47:06 np0005603541 systemd[1]: libpod-f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37.scope: Deactivated successfully.
Jan 31 01:47:06 np0005603541 podman[74737]: 2026-01-31 06:47:06.13226451 +0000 UTC m=+0.649977752 container died f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:06 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7bb6f9249c158d6aaf43b57d1475c35ddbf2fbb26d6abc3e6f40764450b50225-merged.mount: Deactivated successfully.
Jan 31 01:47:06 np0005603541 podman[74737]: 2026-01-31 06:47:06.168432807 +0000 UTC m=+0.686146049 container remove f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37 (image=quay.io/ceph/ceph:v18, name=cranky_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 01:47:06 np0005603541 systemd[1]: libpod-conmon-f1f14c014aec1616a226ae7769d6b852f6a53db09e3cff492eeb2affba192d37.scope: Deactivated successfully.
Jan 31 01:47:06 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'devicehealth'
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:07.037+0000 7f09985c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  from numpy import show_config as show_numpy_config
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'influx'
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:07.613+0000 7f09985c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 01:47:07 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'insights'
Jan 31 01:47:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:07.901+0000 7f09985c8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 01:47:08 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'iostat'
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.227739751 +0000 UTC m=+0.040550016 container create 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:47:08 np0005603541 systemd[1]: Started libpod-conmon-78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50.scope.
Jan 31 01:47:08 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:08 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffb68d43710a9f319fdceeb0dd266b9e7b0a6d62e28a8dc043b3a19e641d2ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:08 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffb68d43710a9f319fdceeb0dd266b9e7b0a6d62e28a8dc043b3a19e641d2ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:08 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffb68d43710a9f319fdceeb0dd266b9e7b0a6d62e28a8dc043b3a19e641d2ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.298185269 +0000 UTC m=+0.110995534 container init 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.302690161 +0000 UTC m=+0.115500436 container start 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.211357339 +0000 UTC m=+0.024167604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.312663075 +0000 UTC m=+0.125473340 container attach 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 01:47:08 np0005603541 ceph-mgr[74648]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 01:47:08 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'k8sevents'
Jan 31 01:47:08 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:08.471+0000 7f09985c8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 01:47:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841102169' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]: 
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]: {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "health": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "status": "HEALTH_OK",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "checks": {},
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "mutes": []
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "election_epoch": 5,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "quorum": [
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        0
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    ],
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "quorum_names": [
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "compute-0"
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    ],
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "quorum_age": 10,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "monmap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "epoch": 1,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "min_mon_release_name": "reef",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_mons": 1
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "osdmap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "epoch": 1,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_osds": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_up_osds": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "osd_up_since": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_in_osds": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "osd_in_since": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_remapped_pgs": 0
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "pgmap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "pgs_by_state": [],
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_pgs": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_pools": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_objects": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "data_bytes": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "bytes_used": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "bytes_avail": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "bytes_total": 0
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "fsmap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "epoch": 1,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "by_rank": [],
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "up:standby": 0
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "mgrmap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "available": false,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "num_standbys": 0,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "modules": [
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:            "iostat",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:            "nfs",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:            "restful"
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        ],
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "services": {}
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "servicemap": {
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "epoch": 1,
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:        "services": {}
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    },
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]:    "progress_events": {}
Jan 31 01:47:08 np0005603541 peaceful_elgamal[74810]: }
Jan 31 01:47:08 np0005603541 systemd[1]: libpod-78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50.scope: Deactivated successfully.
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.734946567 +0000 UTC m=+0.547756832 container died 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:08 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6ffb68d43710a9f319fdceeb0dd266b9e7b0a6d62e28a8dc043b3a19e641d2ed-merged.mount: Deactivated successfully.
Jan 31 01:47:08 np0005603541 podman[74794]: 2026-01-31 06:47:08.784070843 +0000 UTC m=+0.596881108 container remove 78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50 (image=quay.io/ceph/ceph:v18, name=peaceful_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:47:08 np0005603541 systemd[1]: libpod-conmon-78605a21e9fad2c27d329c13c8414f2c4180aebcba29044fb93477a766305b50.scope: Deactivated successfully.
Jan 31 01:47:10 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'localpool'
Jan 31 01:47:10 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 01:47:10 np0005603541 podman[74846]: 2026-01-31 06:47:10.865428508 +0000 UTC m=+0.058625950 container create bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:47:10 np0005603541 systemd[1]: Started libpod-conmon-bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1.scope.
Jan 31 01:47:10 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:10 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d4c0dfbe0f161958d3369b305bf89d06f99a9ea40adf2bc72dcbc83e105bfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:10 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d4c0dfbe0f161958d3369b305bf89d06f99a9ea40adf2bc72dcbc83e105bfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:10 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d4c0dfbe0f161958d3369b305bf89d06f99a9ea40adf2bc72dcbc83e105bfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:10 np0005603541 podman[74846]: 2026-01-31 06:47:10.839078411 +0000 UTC m=+0.032275943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:10 np0005603541 podman[74846]: 2026-01-31 06:47:10.935165419 +0000 UTC m=+0.128362881 container init bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:47:10 np0005603541 podman[74846]: 2026-01-31 06:47:10.943216397 +0000 UTC m=+0.136413879 container start bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:47:10 np0005603541 podman[74846]: 2026-01-31 06:47:10.950553017 +0000 UTC m=+0.143750459 container attach bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:47:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64066282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]: 
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]: {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "health": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "status": "HEALTH_OK",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "checks": {},
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "mutes": []
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "election_epoch": 5,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "quorum": [
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        0
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    ],
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "quorum_names": [
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "compute-0"
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    ],
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "quorum_age": 12,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "monmap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "epoch": 1,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "min_mon_release_name": "reef",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_mons": 1
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "osdmap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "epoch": 1,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_osds": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_up_osds": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "osd_up_since": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_in_osds": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "osd_in_since": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_remapped_pgs": 0
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "pgmap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "pgs_by_state": [],
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_pgs": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_pools": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_objects": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "data_bytes": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "bytes_used": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "bytes_avail": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "bytes_total": 0
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "fsmap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "epoch": 1,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "by_rank": [],
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "up:standby": 0
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "mgrmap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "available": false,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "num_standbys": 0,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "modules": [
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:            "iostat",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:            "nfs",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:            "restful"
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        ],
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "services": {}
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "servicemap": {
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "epoch": 1,
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:        "services": {}
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    },
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]:    "progress_events": {}
Jan 31 01:47:11 np0005603541 gifted_gagarin[74862]: }
Jan 31 01:47:11 np0005603541 systemd[1]: libpod-bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1.scope: Deactivated successfully.
Jan 31 01:47:11 np0005603541 podman[74846]: 2026-01-31 06:47:11.324849132 +0000 UTC m=+0.518046574 container died bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:47:11 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a5d4c0dfbe0f161958d3369b305bf89d06f99a9ea40adf2bc72dcbc83e105bfa-merged.mount: Deactivated successfully.
Jan 31 01:47:11 np0005603541 podman[74846]: 2026-01-31 06:47:11.378776846 +0000 UTC m=+0.571974288 container remove bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1 (image=quay.io/ceph/ceph:v18, name=gifted_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:11 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'mirroring'
Jan 31 01:47:11 np0005603541 systemd[1]: libpod-conmon-bc4ee2e9a7f7cc4dfa5ad596d19735597ecbac22c05db3aeb2d89fa83cfd83a1.scope: Deactivated successfully.
Jan 31 01:47:11 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'nfs'
Jan 31 01:47:12 np0005603541 ceph-mgr[74648]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 01:47:12 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'orchestrator'
Jan 31 01:47:12 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:12.369+0000 7f09985c8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 01:47:13 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:13.025+0000 7f09985c8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'osd_support'
Jan 31 01:47:13 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:13.311+0000 7f09985c8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.429372695 +0000 UTC m=+0.032048258 container create b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:13 np0005603541 systemd[1]: Started libpod-conmon-b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c.scope.
Jan 31 01:47:13 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb83fb2ec92a2dc41468bf920b38d005ef1e85768045515fa38db193c5304d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb83fb2ec92a2dc41468bf920b38d005ef1e85768045515fa38db193c5304d44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb83fb2ec92a2dc41468bf920b38d005ef1e85768045515fa38db193c5304d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.500861839 +0000 UTC m=+0.103537442 container init b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.504895638 +0000 UTC m=+0.107571211 container start b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.508519768 +0000 UTC m=+0.111195341 container attach b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.415234288 +0000 UTC m=+0.017909871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 01:47:13 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:13.592+0000 7f09985c8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486996954' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]: 
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]: {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "health": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "status": "HEALTH_OK",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "checks": {},
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "mutes": []
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "election_epoch": 5,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "quorum": [
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        0
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    ],
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "quorum_names": [
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "compute-0"
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    ],
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "quorum_age": 15,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "monmap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "epoch": 1,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "min_mon_release_name": "reef",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_mons": 1
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "osdmap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "epoch": 1,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_osds": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_up_osds": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "osd_up_since": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_in_osds": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "osd_in_since": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_remapped_pgs": 0
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "pgmap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "pgs_by_state": [],
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_pgs": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_pools": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_objects": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "data_bytes": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "bytes_used": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "bytes_avail": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "bytes_total": 0
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "fsmap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "epoch": 1,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "by_rank": [],
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "up:standby": 0
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "mgrmap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "available": false,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "num_standbys": 0,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "modules": [
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:            "iostat",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:            "nfs",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:            "restful"
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        ],
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "services": {}
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "servicemap": {
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "epoch": 1,
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:        "services": {}
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    },
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]:    "progress_events": {}
Jan 31 01:47:13 np0005603541 angry_chandrasekhar[74917]: }
Jan 31 01:47:13 np0005603541 systemd[1]: libpod-b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c.scope: Deactivated successfully.
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.877841841 +0000 UTC m=+0.480517414 container died b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 01:47:13 np0005603541 systemd[1]: var-lib-containers-storage-overlay-bb83fb2ec92a2dc41468bf920b38d005ef1e85768045515fa38db193c5304d44-merged.mount: Deactivated successfully.
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'progress'
Jan 31 01:47:13 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:13.911+0000 7f09985c8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 01:47:13 np0005603541 podman[74901]: 2026-01-31 06:47:13.927750945 +0000 UTC m=+0.530426518 container remove b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c (image=quay.io/ceph/ceph:v18, name=angry_chandrasekhar, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 01:47:13 np0005603541 systemd[1]: libpod-conmon-b9af0c6364d3a91b2bbb0195d1bddf43ba836dbd69246f0bd21aed8c48ae8e2c.scope: Deactivated successfully.
Jan 31 01:47:14 np0005603541 ceph-mgr[74648]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 01:47:14 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'prometheus'
Jan 31 01:47:14 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:14.182+0000 7f09985c8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 01:47:15 np0005603541 ceph-mgr[74648]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 01:47:15 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rbd_support'
Jan 31 01:47:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:15.306+0000 7f09985c8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 01:47:15 np0005603541 ceph-mgr[74648]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 01:47:15 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'restful'
Jan 31 01:47:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:15.631+0000 7f09985c8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 01:47:15 np0005603541 podman[74955]: 2026-01-31 06:47:15.980207551 +0000 UTC m=+0.034765085 container create fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 01:47:16 np0005603541 systemd[1]: Started libpod-conmon-fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9.scope.
Jan 31 01:47:16 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2df59b92bba0858c0a6a3ec2e5920f3a64f5d4f2fe13674df93907c0ed1033/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2df59b92bba0858c0a6a3ec2e5920f3a64f5d4f2fe13674df93907c0ed1033/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2df59b92bba0858c0a6a3ec2e5920f3a64f5d4f2fe13674df93907c0ed1033/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:16.040242984 +0000 UTC m=+0.094800518 container init fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:16.045052432 +0000 UTC m=+0.099609966 container start fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:16.048268181 +0000 UTC m=+0.102825735 container attach fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:15.96388916 +0000 UTC m=+0.018446714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318470386' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:16 np0005603541 sad_williams[74971]: 
Jan 31 01:47:16 np0005603541 sad_williams[74971]: {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "health": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "status": "HEALTH_OK",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "checks": {},
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "mutes": []
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "election_epoch": 5,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "quorum": [
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        0
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    ],
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "quorum_names": [
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "compute-0"
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    ],
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "quorum_age": 17,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "monmap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "epoch": 1,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "min_mon_release_name": "reef",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_mons": 1
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "osdmap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "epoch": 1,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_osds": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_up_osds": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "osd_up_since": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_in_osds": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "osd_in_since": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_remapped_pgs": 0
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "pgmap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "pgs_by_state": [],
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_pgs": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_pools": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_objects": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "data_bytes": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "bytes_used": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "bytes_avail": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "bytes_total": 0
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "fsmap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "epoch": 1,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "by_rank": [],
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "up:standby": 0
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "mgrmap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "available": false,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "num_standbys": 0,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "modules": [
Jan 31 01:47:16 np0005603541 sad_williams[74971]:            "iostat",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:            "nfs",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:            "restful"
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        ],
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "services": {}
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "servicemap": {
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "epoch": 1,
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:16 np0005603541 sad_williams[74971]:        "services": {}
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    },
Jan 31 01:47:16 np0005603541 sad_williams[74971]:    "progress_events": {}
Jan 31 01:47:16 np0005603541 sad_williams[74971]: }
Jan 31 01:47:16 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rgw'
Jan 31 01:47:16 np0005603541 systemd[1]: libpod-fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9.scope: Deactivated successfully.
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:16.432325365 +0000 UTC m=+0.486882899 container died fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9b2df59b92bba0858c0a6a3ec2e5920f3a64f5d4f2fe13674df93907c0ed1033-merged.mount: Deactivated successfully.
Jan 31 01:47:16 np0005603541 podman[74955]: 2026-01-31 06:47:16.58248546 +0000 UTC m=+0.637042994 container remove fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9 (image=quay.io/ceph/ceph:v18, name=sad_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:47:16 np0005603541 systemd[1]: libpod-conmon-fe167c4a12fa1366dea8e958a23748d404b84323e7c029227579ec7cd1e240e9.scope: Deactivated successfully.
Jan 31 01:47:17 np0005603541 ceph-mgr[74648]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 01:47:17 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rook'
Jan 31 01:47:17 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:17.206+0000 7f09985c8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 01:47:18 np0005603541 podman[75010]: 2026-01-31 06:47:18.717210795 +0000 UTC m=+0.111610420 container create a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:18 np0005603541 podman[75010]: 2026-01-31 06:47:18.634328501 +0000 UTC m=+0.028728156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:18 np0005603541 systemd[1]: Started libpod-conmon-a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c.scope.
Jan 31 01:47:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac96e35d1c8eec880eea5e4d880fd1f760c0cd27b3a2f88d272237cb4c63ec5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac96e35d1c8eec880eea5e4d880fd1f760c0cd27b3a2f88d272237cb4c63ec5a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac96e35d1c8eec880eea5e4d880fd1f760c0cd27b3a2f88d272237cb4c63ec5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:18 np0005603541 podman[75010]: 2026-01-31 06:47:18.902290686 +0000 UTC m=+0.296690341 container init a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:18 np0005603541 podman[75010]: 2026-01-31 06:47:18.90650639 +0000 UTC m=+0.300906025 container start a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 01:47:18 np0005603541 podman[75010]: 2026-01-31 06:47:18.997134034 +0000 UTC m=+0.391533669 container attach a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'selftest'
Jan 31 01:47:19 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:19.313+0000 7f09985c8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318554494' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:19 np0005603541 objective_yalow[75026]: 
Jan 31 01:47:19 np0005603541 objective_yalow[75026]: {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "health": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "status": "HEALTH_OK",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "checks": {},
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "mutes": []
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "election_epoch": 5,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "quorum": [
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        0
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    ],
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "quorum_names": [
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "compute-0"
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    ],
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "quorum_age": 20,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "monmap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "epoch": 1,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "min_mon_release_name": "reef",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_mons": 1
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "osdmap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "epoch": 1,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_osds": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_up_osds": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "osd_up_since": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_in_osds": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "osd_in_since": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_remapped_pgs": 0
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "pgmap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "pgs_by_state": [],
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_pgs": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_pools": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_objects": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "data_bytes": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "bytes_used": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "bytes_avail": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "bytes_total": 0
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "fsmap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "epoch": 1,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "by_rank": [],
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "up:standby": 0
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "mgrmap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "available": false,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "num_standbys": 0,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "modules": [
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:            "iostat",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:            "nfs",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:            "restful"
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        ],
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "services": {}
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "servicemap": {
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "epoch": 1,
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:        "services": {}
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    },
Jan 31 01:47:19 np0005603541 objective_yalow[75026]:    "progress_events": {}
Jan 31 01:47:19 np0005603541 objective_yalow[75026]: }
Jan 31 01:47:19 np0005603541 systemd[1]: libpod-a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c.scope: Deactivated successfully.
Jan 31 01:47:19 np0005603541 podman[75052]: 2026-01-31 06:47:19.368295362 +0000 UTC m=+0.021205071 container died a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'snap_schedule'
Jan 31 01:47:19 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:19.567+0000 7f09985c8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ac96e35d1c8eec880eea5e4d880fd1f760c0cd27b3a2f88d272237cb4c63ec5a-merged.mount: Deactivated successfully.
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'stats'
Jan 31 01:47:19 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:19.817+0000 7f09985c8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 01:47:19 np0005603541 podman[75052]: 2026-01-31 06:47:19.891407638 +0000 UTC m=+0.544317347 container remove a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c (image=quay.io/ceph/ceph:v18, name=objective_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:47:19 np0005603541 systemd[1]: libpod-conmon-a78d0f099e56ffcda9a3da05b26219294efc3a508c60ee45f2641d5e5f08895c.scope: Deactivated successfully.
Jan 31 01:47:20 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'status'
Jan 31 01:47:20 np0005603541 ceph-mgr[74648]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 01:47:20 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'telegraf'
Jan 31 01:47:20 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:20.332+0000 7f09985c8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 01:47:20 np0005603541 ceph-mgr[74648]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 01:47:20 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'telemetry'
Jan 31 01:47:20 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:20.593+0000 7f09985c8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 01:47:21 np0005603541 ceph-mgr[74648]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 01:47:21 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 01:47:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:21.226+0000 7f09985c8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 01:47:21 np0005603541 ceph-mgr[74648]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:21 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'volumes'
Jan 31 01:47:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:21.945+0000 7f09985c8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:21 np0005603541 podman[75068]: 2026-01-31 06:47:21.952142427 +0000 UTC m=+0.039161162 container create 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 01:47:21 np0005603541 systemd[1]: Started libpod-conmon-02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645.scope.
Jan 31 01:47:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f72eeaa9c273b3a1ddc3423d3c3a03aa4a642bbca5adb229f4ae964ab8c5a79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f72eeaa9c273b3a1ddc3423d3c3a03aa4a642bbca5adb229f4ae964ab8c5a79/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f72eeaa9c273b3a1ddc3423d3c3a03aa4a642bbca5adb229f4ae964ab8c5a79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:22.022110744 +0000 UTC m=+0.109129489 container init 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:22.026143013 +0000 UTC m=+0.113161748 container start 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:22.02965274 +0000 UTC m=+0.116671505 container attach 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:21.933940141 +0000 UTC m=+0.020958906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040976561' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:22 np0005603541 happy_meitner[75084]: 
Jan 31 01:47:22 np0005603541 happy_meitner[75084]: {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "health": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "status": "HEALTH_OK",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "checks": {},
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "mutes": []
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "election_epoch": 5,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "quorum": [
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        0
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    ],
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "quorum_names": [
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "compute-0"
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    ],
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "quorum_age": 23,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "monmap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "epoch": 1,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "min_mon_release_name": "reef",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_mons": 1
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "osdmap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "epoch": 1,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_osds": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_up_osds": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "osd_up_since": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_in_osds": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "osd_in_since": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_remapped_pgs": 0
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "pgmap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "pgs_by_state": [],
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_pgs": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_pools": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_objects": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "data_bytes": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "bytes_used": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "bytes_avail": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "bytes_total": 0
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "fsmap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "epoch": 1,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "by_rank": [],
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "up:standby": 0
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "mgrmap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "available": false,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "num_standbys": 0,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "modules": [
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:            "iostat",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:            "nfs",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:            "restful"
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        ],
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "services": {}
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "servicemap": {
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "epoch": 1,
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:        "services": {}
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    },
Jan 31 01:47:22 np0005603541 happy_meitner[75084]:    "progress_events": {}
Jan 31 01:47:22 np0005603541 happy_meitner[75084]: }
Jan 31 01:47:22 np0005603541 systemd[1]: libpod-02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645.scope: Deactivated successfully.
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:22.399551397 +0000 UTC m=+0.486570132 container died 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:22 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8f72eeaa9c273b3a1ddc3423d3c3a03aa4a642bbca5adb229f4ae964ab8c5a79-merged.mount: Deactivated successfully.
Jan 31 01:47:22 np0005603541 podman[75068]: 2026-01-31 06:47:22.44083819 +0000 UTC m=+0.527856925 container remove 02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645 (image=quay.io/ceph/ceph:v18, name=happy_meitner, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 01:47:22 np0005603541 systemd[1]: libpod-conmon-02a79c80367ebae8a578da27184f53802ce6b981dc02c120dc3b2b90b7266645.scope: Deactivated successfully.
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'zabbix'
Jan 31 01:47:22 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:22.727+0000 7f09985c8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 01:47:22 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:22.955+0000 7f09985c8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: ms_deliver_dispatch: unhandled message 0x559cbfa78f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gghdjs
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr handle_mgr_map Activating!
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr handle_mgr_map I am now activating
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.gghdjs(active, starting, since 0.0105557s)
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gghdjs", "id": "compute-0.gghdjs"} v 0) v1
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gghdjs", "id": "compute-0.gghdjs"}]: dispatch
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: balancer
Jan 31 01:47:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Manager daemon compute-0.gghdjs is now available
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: crash
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer INFO root] Starting
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:47:22
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [balancer INFO root] No pools available
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: devicehealth
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] Starting
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: iostat
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: nfs
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: orchestrator
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: pg_autoscaler
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: progress
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [progress INFO root] Loading...
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [progress INFO root] No stored events to load
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [progress INFO root] Loaded [] historic events
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] recovery thread starting
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] starting setup
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: rbd_support
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: restful
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 01:47:22 np0005603541 ceph-mgr[74648]: [restful WARNING root] server not running: no certificate configured
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: status
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"} v 0) v1
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"}]: dispatch
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: telemetry
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] PerfHandler: starting
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TaskHandler: starting
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"} v 0) v1
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"}]: dispatch
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] setup complete
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 31 01:47:23 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: volumes
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: Activating manager daemon compute-0.gghdjs
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: Manager daemon compute-0.gghdjs is now available
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"}]: dispatch
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"}]: dispatch
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: from='mgr.14102 192.168.122.100:0/2388803382' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:23 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.gghdjs(active, since 1.02486s)
Jan 31 01:47:24 np0005603541 podman[75203]: 2026-01-31 06:47:24.497970851 +0000 UTC m=+0.039398239 container create 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:47:24 np0005603541 systemd[1]: Started libpod-conmon-72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed.scope.
Jan 31 01:47:24 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a8e7974d2232938acf9bc560db343e89293d6f0e1394895aa31f326d700203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a8e7974d2232938acf9bc560db343e89293d6f0e1394895aa31f326d700203/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a8e7974d2232938acf9bc560db343e89293d6f0e1394895aa31f326d700203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:24 np0005603541 podman[75203]: 2026-01-31 06:47:24.553453901 +0000 UTC m=+0.094881319 container init 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:47:24 np0005603541 podman[75203]: 2026-01-31 06:47:24.558861084 +0000 UTC m=+0.100288482 container start 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:24 np0005603541 podman[75203]: 2026-01-31 06:47:24.562047592 +0000 UTC m=+0.103475010 container attach 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:47:24 np0005603541 podman[75203]: 2026-01-31 06:47:24.478572774 +0000 UTC m=+0.020000252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:24 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.gghdjs(active, since 2s)
Jan 31 01:47:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 01:47:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165201664' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]: 
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]: {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "health": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "status": "HEALTH_OK",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "checks": {},
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "mutes": []
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "election_epoch": 5,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "quorum": [
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        0
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    ],
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "quorum_names": [
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "compute-0"
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    ],
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "quorum_age": 26,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "monmap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "epoch": 1,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "min_mon_release_name": "reef",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_mons": 1
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "osdmap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "epoch": 1,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_osds": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_up_osds": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "osd_up_since": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_in_osds": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "osd_in_since": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_remapped_pgs": 0
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "pgmap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "pgs_by_state": [],
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_pgs": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_pools": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_objects": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "data_bytes": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "bytes_used": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "bytes_avail": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "bytes_total": 0
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "fsmap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "epoch": 1,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "by_rank": [],
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "up:standby": 0
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "mgrmap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "available": true,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "num_standbys": 0,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "modules": [
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:            "iostat",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:            "nfs",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:            "restful"
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        ],
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "services": {}
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "servicemap": {
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "epoch": 1,
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "modified": "2026-01-31T06:46:56.268384+0000",
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:        "services": {}
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    },
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]:    "progress_events": {}
Jan 31 01:47:25 np0005603541 keen_ptolemy[75219]: }
Jan 31 01:47:25 np0005603541 systemd[1]: libpod-72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed.scope: Deactivated successfully.
Jan 31 01:47:25 np0005603541 podman[75203]: 2026-01-31 06:47:25.180154131 +0000 UTC m=+0.721581549 container died 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:47:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c4a8e7974d2232938acf9bc560db343e89293d6f0e1394895aa31f326d700203-merged.mount: Deactivated successfully.
Jan 31 01:47:25 np0005603541 podman[75203]: 2026-01-31 06:47:25.248356894 +0000 UTC m=+0.789784282 container remove 72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed (image=quay.io/ceph/ceph:v18, name=keen_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:25 np0005603541 systemd[1]: libpod-conmon-72990db920898e9d43fe38746149a3a86100b56e5dd6ac762c57a572f488baed.scope: Deactivated successfully.
Jan 31 01:47:25 np0005603541 podman[75257]: 2026-01-31 06:47:25.303268512 +0000 UTC m=+0.040904826 container create 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:47:25 np0005603541 systemd[1]: Started libpod-conmon-90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38.scope.
Jan 31 01:47:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f6a8499f6f3dfe1db1b5a08d4e707cd12cb36e5518ba0d59b553ae3303af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f6a8499f6f3dfe1db1b5a08d4e707cd12cb36e5518ba0d59b553ae3303af1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f6a8499f6f3dfe1db1b5a08d4e707cd12cb36e5518ba0d59b553ae3303af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f6a8499f6f3dfe1db1b5a08d4e707cd12cb36e5518ba0d59b553ae3303af1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:25 np0005603541 podman[75257]: 2026-01-31 06:47:25.27996897 +0000 UTC m=+0.017605314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:25 np0005603541 podman[75257]: 2026-01-31 06:47:25.389669862 +0000 UTC m=+0.127306206 container init 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:25 np0005603541 podman[75257]: 2026-01-31 06:47:25.396477939 +0000 UTC m=+0.134114263 container start 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:25 np0005603541 podman[75257]: 2026-01-31 06:47:25.400485407 +0000 UTC m=+0.138121821 container attach 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 01:47:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1647704965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:47:25 np0005603541 systemd[1]: libpod-90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38.scope: Deactivated successfully.
Jan 31 01:47:25 np0005603541 podman[75299]: 2026-01-31 06:47:25.97274266 +0000 UTC m=+0.027278751 container died 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 31 01:47:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-002f6a8499f6f3dfe1db1b5a08d4e707cd12cb36e5518ba0d59b553ae3303af1-merged.mount: Deactivated successfully.
Jan 31 01:47:26 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1647704965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:47:26 np0005603541 podman[75299]: 2026-01-31 06:47:26.010367544 +0000 UTC m=+0.064903605 container remove 90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38 (image=quay.io/ceph/ceph:v18, name=vigorous_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:26 np0005603541 systemd[1]: libpod-conmon-90cc2a707f2b61e464217202803ff2b6d52e8f8154a5f45eff76545cf7a26f38.scope: Deactivated successfully.
Jan 31 01:47:26 np0005603541 podman[75313]: 2026-01-31 06:47:26.09414516 +0000 UTC m=+0.068405970 container create f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:26 np0005603541 podman[75313]: 2026-01-31 06:47:26.04404766 +0000 UTC m=+0.018308490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:26 np0005603541 systemd[1]: Started libpod-conmon-f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec.scope.
Jan 31 01:47:26 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557a56a3d73bfab4e24da8fb64b9c26539d75109098a1f5f7f5496a2d99f2ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557a56a3d73bfab4e24da8fb64b9c26539d75109098a1f5f7f5496a2d99f2ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557a56a3d73bfab4e24da8fb64b9c26539d75109098a1f5f7f5496a2d99f2ec6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:26 np0005603541 podman[75313]: 2026-01-31 06:47:26.19970427 +0000 UTC m=+0.173965100 container init f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:47:26 np0005603541 podman[75313]: 2026-01-31 06:47:26.204163609 +0000 UTC m=+0.178424419 container start f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:26 np0005603541 podman[75313]: 2026-01-31 06:47:26.214836051 +0000 UTC m=+0.189096911 container attach f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 31 01:47:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/903407325' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 01:47:26 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:27 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/903407325' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 01:47:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/903407325' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  1: '-n'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  2: 'mgr.compute-0.gghdjs'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  3: '-f'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  4: '--setuser'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  5: 'ceph'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  6: '--setgroup'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  7: 'ceph'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr respawn  exe_path /proc/self/exe
Jan 31 01:47:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.gghdjs(active, since 4s)
Jan 31 01:47:27 np0005603541 systemd[1]: libpod-f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec.scope: Deactivated successfully.
Jan 31 01:47:27 np0005603541 podman[75313]: 2026-01-31 06:47:27.225346058 +0000 UTC m=+1.199606888 container died f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:27 np0005603541 systemd[1]: var-lib-containers-storage-overlay-557a56a3d73bfab4e24da8fb64b9c26539d75109098a1f5f7f5496a2d99f2ec6-merged.mount: Deactivated successfully.
Jan 31 01:47:27 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: ignoring --setuser ceph since I am not root
Jan 31 01:47:27 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: ignoring --setgroup ceph since I am not root
Jan 31 01:47:27 np0005603541 podman[75313]: 2026-01-31 06:47:27.271135312 +0000 UTC m=+1.245396122 container remove f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec (image=quay.io/ceph/ceph:v18, name=musing_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: pidfile_write: ignore empty --pid-file
Jan 31 01:47:27 np0005603541 systemd[1]: libpod-conmon-f988d3867acb150b77b96a94271bc0fd0671310da8cabfefb7d9560d68cee1ec.scope: Deactivated successfully.
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.32644842 +0000 UTC m=+0.038732152 container create b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 31 01:47:27 np0005603541 systemd[1]: Started libpod-conmon-b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0.scope.
Jan 31 01:47:27 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd31e33a02c6e71c7cc09321339cb311c9354078c1243d6f4656a3ca8b35bcb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd31e33a02c6e71c7cc09321339cb311c9354078c1243d6f4656a3ca8b35bcb0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd31e33a02c6e71c7cc09321339cb311c9354078c1243d6f4656a3ca8b35bcb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'alerts'
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.389538988 +0000 UTC m=+0.101822600 container init b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.394517359 +0000 UTC m=+0.106800881 container start b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.398219391 +0000 UTC m=+0.110502953 container attach b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.310519608 +0000 UTC m=+0.022803140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'balancer'
Jan 31 01:47:27 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:27.689+0000 7f6f30cc6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 01:47:27 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'cephadm'
Jan 31 01:47:27 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:27.957+0000 7f6f30cc6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 01:47:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 01:47:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252622402' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 01:47:27 np0005603541 crazy_germain[75412]: {
Jan 31 01:47:27 np0005603541 crazy_germain[75412]:    "epoch": 5,
Jan 31 01:47:27 np0005603541 crazy_germain[75412]:    "available": true,
Jan 31 01:47:27 np0005603541 crazy_germain[75412]:    "active_name": "compute-0.gghdjs",
Jan 31 01:47:27 np0005603541 crazy_germain[75412]:    "num_standby": 0
Jan 31 01:47:27 np0005603541 crazy_germain[75412]: }
Jan 31 01:47:27 np0005603541 systemd[1]: libpod-b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0.scope: Deactivated successfully.
Jan 31 01:47:27 np0005603541 podman[75380]: 2026-01-31 06:47:27.985311237 +0000 UTC m=+0.697594749 container died b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:47:28 np0005603541 systemd[1]: var-lib-containers-storage-overlay-bd31e33a02c6e71c7cc09321339cb311c9354078c1243d6f4656a3ca8b35bcb0-merged.mount: Deactivated successfully.
Jan 31 01:47:28 np0005603541 podman[75380]: 2026-01-31 06:47:28.045168116 +0000 UTC m=+0.757451628 container remove b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0 (image=quay.io/ceph/ceph:v18, name=crazy_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:28 np0005603541 systemd[1]: libpod-conmon-b480d45212aeb77386b18ea9f78fbe9a4e6712b522eb8599dbd93c6d92360ee0.scope: Deactivated successfully.
Jan 31 01:47:28 np0005603541 podman[75453]: 2026-01-31 06:47:28.099411718 +0000 UTC m=+0.041346007 container create 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:28 np0005603541 systemd[1]: Started libpod-conmon-6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c.scope.
Jan 31 01:47:28 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d1f9bb9d78b88466ec88789c5b24a73a932b263bfae3e1efd0c2a674e8751/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d1f9bb9d78b88466ec88789c5b24a73a932b263bfae3e1efd0c2a674e8751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298d1f9bb9d78b88466ec88789c5b24a73a932b263bfae3e1efd0c2a674e8751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:28 np0005603541 podman[75453]: 2026-01-31 06:47:28.162154067 +0000 UTC m=+0.104088356 container init 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:47:28 np0005603541 podman[75453]: 2026-01-31 06:47:28.166020892 +0000 UTC m=+0.107955181 container start 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:28 np0005603541 podman[75453]: 2026-01-31 06:47:28.16958975 +0000 UTC m=+0.111524039 container attach 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:47:28 np0005603541 podman[75453]: 2026-01-31 06:47:28.076560116 +0000 UTC m=+0.018494425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:28 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/903407325' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 01:47:29 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'crash'
Jan 31 01:47:30 np0005603541 ceph-mgr[74648]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 01:47:30 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'dashboard'
Jan 31 01:47:30 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:30.159+0000 7f6f30cc6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 01:47:31 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'devicehealth'
Jan 31 01:47:31 np0005603541 ceph-mgr[74648]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 01:47:31 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 01:47:31 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:31.989+0000 7f6f30cc6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 01:47:32 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 01:47:32 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 01:47:32 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  from numpy import show_config as show_numpy_config
Jan 31 01:47:32 np0005603541 ceph-mgr[74648]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 01:47:32 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'influx'
Jan 31 01:47:32 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:32.621+0000 7f6f30cc6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 01:47:32 np0005603541 ceph-mgr[74648]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 01:47:32 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'insights'
Jan 31 01:47:32 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:32.909+0000 7f6f30cc6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 01:47:33 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'iostat'
Jan 31 01:47:33 np0005603541 ceph-mgr[74648]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 01:47:33 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'k8sevents'
Jan 31 01:47:33 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:33.457+0000 7f6f30cc6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 01:47:35 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'localpool'
Jan 31 01:47:35 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 01:47:36 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'mirroring'
Jan 31 01:47:36 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'nfs'
Jan 31 01:47:37 np0005603541 ceph-mgr[74648]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 01:47:37 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'orchestrator'
Jan 31 01:47:37 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:37.318+0000 7f6f30cc6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 01:47:38 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:38.005+0000 7f6f30cc6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'osd_support'
Jan 31 01:47:38 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:38.282+0000 7f6f30cc6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 01:47:38 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:38.535+0000 7f6f30cc6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 01:47:38 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'progress'
Jan 31 01:47:38 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:38.815+0000 7f6f30cc6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 01:47:39 np0005603541 ceph-mgr[74648]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 01:47:39 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:39.078+0000 7f6f30cc6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 01:47:39 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'prometheus'
Jan 31 01:47:40 np0005603541 ceph-mgr[74648]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 01:47:40 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rbd_support'
Jan 31 01:47:40 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:40.132+0000 7f6f30cc6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 01:47:40 np0005603541 ceph-mgr[74648]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 01:47:40 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'restful'
Jan 31 01:47:40 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:40.486+0000 7f6f30cc6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 01:47:41 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rgw'
Jan 31 01:47:41 np0005603541 ceph-mgr[74648]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 01:47:42 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'rook'
Jan 31 01:47:42 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:41.999+0000 7f6f30cc6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'selftest'
Jan 31 01:47:44 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:44.234+0000 7f6f30cc6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'snap_schedule'
Jan 31 01:47:44 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:44.489+0000 7f6f30cc6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 01:47:44 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'stats'
Jan 31 01:47:44 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:44.746+0000 7f6f30cc6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 01:47:45 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'status'
Jan 31 01:47:45 np0005603541 ceph-mgr[74648]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 01:47:45 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'telegraf'
Jan 31 01:47:45 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:45.312+0000 7f6f30cc6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 01:47:45 np0005603541 ceph-mgr[74648]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 01:47:45 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'telemetry'
Jan 31 01:47:45 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:45.570+0000 7f6f30cc6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 01:47:46 np0005603541 ceph-mgr[74648]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 01:47:46 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 01:47:46 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:46.249+0000 7f6f30cc6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 01:47:47 np0005603541 ceph-mgr[74648]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:47 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'volumes'
Jan 31 01:47:47 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:47.037+0000 7f6f30cc6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 01:47:47 np0005603541 ceph-mgr[74648]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 01:47:47 np0005603541 ceph-mgr[74648]: mgr[py] Loading python module 'zabbix'
Jan 31 01:47:47 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:47.854+0000 7f6f30cc6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 01:47:48 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:47:48.149+0000 7f6f30cc6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Active manager daemon compute-0.gghdjs restarted
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gghdjs
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: ms_deliver_dispatch: unhandled message 0x559600fea420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr handle_mgr_map Activating!
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr handle_mgr_map I am now activating
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.gghdjs(active, starting, since 0.0155041s)
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gghdjs", "id": "compute-0.gghdjs"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gghdjs", "id": "compute-0.gghdjs"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: balancer
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Manager daemon compute-0.gghdjs is now available
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Starting
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:47:48
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] No pools available
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: Active manager daemon compute-0.gghdjs restarted
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: Activating manager daemon compute-0.gghdjs
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: Manager daemon compute-0.gghdjs is now available
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: cephadm
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: crash
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: devicehealth
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] Starting
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: iostat
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: nfs
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: orchestrator
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: pg_autoscaler
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: progress
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [progress INFO root] Loading...
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [progress INFO root] No stored events to load
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [progress INFO root] Loaded [] historic events
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] recovery thread starting
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] starting setup
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: rbd_support
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: restful
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: status
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [restful WARNING root] server not running: no certificate configured
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: telemetry
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] PerfHandler: starting
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TaskHandler: starting
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"} v 0) v1
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"}]: dispatch
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] setup complete
Jan 31 01:47:48 np0005603541 ceph-mgr[74648]: mgr load Constructed class from module: volumes
Jan 31 01:47:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019928245 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:47:49 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.gghdjs(active, since 1.11472s)
Jan 31 01:47:49 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 01:47:49 np0005603541 suspicious_knuth[75470]: {
Jan 31 01:47:49 np0005603541 suspicious_knuth[75470]:    "mgrmap_epoch": 7,
Jan 31 01:47:49 np0005603541 suspicious_knuth[75470]:    "initialized": true
Jan 31 01:47:49 np0005603541 suspicious_knuth[75470]: }
Jan 31 01:47:49 np0005603541 systemd[1]: libpod-6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c.scope: Deactivated successfully.
Jan 31 01:47:49 np0005603541 podman[75453]: 2026-01-31 06:47:49.297091132 +0000 UTC m=+21.239025411 container died 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: Found migration_current of "None". Setting to last migration.
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/mirror_snapshot_schedule"}]: dispatch
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gghdjs/trash_purge_schedule"}]: dispatch
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 31 01:47:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-298d1f9bb9d78b88466ec88789c5b24a73a932b263bfae3e1efd0c2a674e8751-merged.mount: Deactivated successfully.
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 31 01:47:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:49 np0005603541 podman[75453]: 2026-01-31 06:47:49.729295248 +0000 UTC m=+21.671229547 container remove 6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c (image=quay.io/ceph/ceph:v18, name=suspicious_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:47:49 np0005603541 systemd[1]: libpod-conmon-6642e0fbfe3f6b91a4987f7ab3cd253dc80139efc23b48dfd1019d45697d315c.scope: Deactivated successfully.
Jan 31 01:47:49 np0005603541 podman[75629]: 2026-01-31 06:47:49.790626344 +0000 UTC m=+0.024563515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:49 np0005603541 podman[75629]: 2026-01-31 06:47:49.901039383 +0000 UTC m=+0.134976554 container create e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:50 np0005603541 systemd[1]: Started libpod-conmon-e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01.scope.
Jan 31 01:47:50 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1d730a1bcfa9948b1e310babaef20188df895d4b241dd555fe2a860df2b47a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1d730a1bcfa9948b1e310babaef20188df895d4b241dd555fe2a860df2b47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1d730a1bcfa9948b1e310babaef20188df895d4b241dd555fe2a860df2b47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:50 np0005603541 podman[75629]: 2026-01-31 06:47:50.220555394 +0000 UTC m=+0.454492605 container init e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:47:50 np0005603541 podman[75629]: 2026-01-31 06:47:50.224959412 +0000 UTC m=+0.458896583 container start e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:50 np0005603541 podman[75629]: 2026-01-31 06:47:50.282038193 +0000 UTC m=+0.515975394 container attach e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.gghdjs(active, since 2s)
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cherrypy.error] [31/Jan/2026:06:47:50] ENGINE Bus STARTING
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : [31/Jan/2026:06:47:50] ENGINE Bus STARTING
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cherrypy.error] [31/Jan/2026:06:47:50] ENGINE Serving on http://192.168.122.100:8765
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : [31/Jan/2026:06:47:50] ENGINE Serving on http://192.168.122.100:8765
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cherrypy.error] [31/Jan/2026:06:47:50] ENGINE Serving on https://192.168.122.100:7150
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : [31/Jan/2026:06:47:50] ENGINE Serving on https://192.168.122.100:7150
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cherrypy.error] [31/Jan/2026:06:47:50] ENGINE Bus STARTED
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : [31/Jan/2026:06:47:50] ENGINE Bus STARTED
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:47:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cherrypy.error] [31/Jan/2026:06:47:50] ENGINE Client ('192.168.122.100', 35180) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 01:47:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : [31/Jan/2026:06:47:50] ENGINE Client ('192.168.122.100', 35180) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 01:47:50 np0005603541 systemd[1]: libpod-e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01.scope: Deactivated successfully.
Jan 31 01:47:50 np0005603541 podman[75629]: 2026-01-31 06:47:50.991201184 +0000 UTC m=+1.225138355 container died e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 01:47:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ae1d730a1bcfa9948b1e310babaef20188df895d4b241dd555fe2a860df2b47a-merged.mount: Deactivated successfully.
Jan 31 01:47:51 np0005603541 podman[75629]: 2026-01-31 06:47:51.466122499 +0000 UTC m=+1.700059700 container remove e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01 (image=quay.io/ceph/ceph:v18, name=upbeat_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:47:51 np0005603541 systemd[1]: libpod-conmon-e1e021c8fe2f9db878b278553c5dc9edce300d6c9da41642cd9346884ad5ed01.scope: Deactivated successfully.
Jan 31 01:47:51 np0005603541 podman[75706]: 2026-01-31 06:47:51.584445582 +0000 UTC m=+0.103471470 container create 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:51 np0005603541 podman[75706]: 2026-01-31 06:47:51.503347382 +0000 UTC m=+0.022373260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:51 np0005603541 systemd[1]: Started libpod-conmon-3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118.scope.
Jan 31 01:47:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5453b15134b7c825d9b39337054fbb9d29bac31b3f5ca2d57a3bf092aa0d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5453b15134b7c825d9b39337054fbb9d29bac31b3f5ca2d57a3bf092aa0d90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5453b15134b7c825d9b39337054fbb9d29bac31b3f5ca2d57a3bf092aa0d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:51 np0005603541 podman[75706]: 2026-01-31 06:47:51.674503272 +0000 UTC m=+0.193529150 container init 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:51 np0005603541 podman[75706]: 2026-01-31 06:47:51.678531761 +0000 UTC m=+0.197557619 container start 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 01:47:51 np0005603541 podman[75706]: 2026-01-31 06:47:51.728428126 +0000 UTC m=+0.247454014 container attach 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: [31/Jan/2026:06:47:50] ENGINE Bus STARTING
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: [31/Jan/2026:06:47:50] ENGINE Serving on http://192.168.122.100:8765
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: [31/Jan/2026:06:47:50] ENGINE Serving on https://192.168.122.100:7150
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: [31/Jan/2026:06:47:50] ENGINE Bus STARTED
Jan 31 01:47:51 np0005603541 ceph-mon[74355]: [31/Jan/2026:06:47:50] ENGINE Client ('192.168.122.100', 35180) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Set ssh ssh_user
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Set ssh ssh_config
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 01:47:52 np0005603541 strange_neumann[75721]: ssh user set to ceph-admin. sudo will be used
Jan 31 01:47:52 np0005603541 systemd[1]: libpod-3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118.scope: Deactivated successfully.
Jan 31 01:47:52 np0005603541 podman[75706]: 2026-01-31 06:47:52.246473188 +0000 UTC m=+0.765499046 container died 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:47:52 np0005603541 systemd[1]: var-lib-containers-storage-overlay-de5453b15134b7c825d9b39337054fbb9d29bac31b3f5ca2d57a3bf092aa0d90-merged.mount: Deactivated successfully.
Jan 31 01:47:52 np0005603541 podman[75706]: 2026-01-31 06:47:52.289021482 +0000 UTC m=+0.808047340 container remove 3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118 (image=quay.io/ceph/ceph:v18, name=strange_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:52 np0005603541 systemd[1]: libpod-conmon-3a1b6a0744f41bf93ed9b9b44ed5fe037d9ba50647fc8bde9394f0853110e118.scope: Deactivated successfully.
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.339803239 +0000 UTC m=+0.035045301 container create 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:52 np0005603541 systemd[1]: Started libpod-conmon-988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac.scope.
Jan 31 01:47:52 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.413712382 +0000 UTC m=+0.108954504 container init 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.420067448 +0000 UTC m=+0.115309510 container start 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.325042726 +0000 UTC m=+0.020284818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.423891292 +0000 UTC m=+0.119133374 container attach 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 31 01:47:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Set ssh private key
Jan 31 01:47:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 01:47:52 np0005603541 systemd[1]: libpod-988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac.scope: Deactivated successfully.
Jan 31 01:47:52 np0005603541 podman[75758]: 2026-01-31 06:47:52.999158348 +0000 UTC m=+0.694400470 container died 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 01:47:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ca92fcb720d1668fa2be57905b3b5692c3a991eab46380a2bb9da2b4ed60777a-merged.mount: Deactivated successfully.
Jan 31 01:47:53 np0005603541 podman[75758]: 2026-01-31 06:47:53.039978001 +0000 UTC m=+0.735220053 container remove 988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac (image=quay.io/ceph/ceph:v18, name=youthful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 01:47:53 np0005603541 systemd[1]: libpod-conmon-988b74bd3f861df195dc2e5ce8324d3c92e758b19a5e4ca50afcb6f66cad18ac.scope: Deactivated successfully.
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.102388472 +0000 UTC m=+0.045051247 container create fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:47:53 np0005603541 systemd[1]: Started libpod-conmon-fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c.scope.
Jan 31 01:47:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.173787244 +0000 UTC m=+0.116450049 container init fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.082235087 +0000 UTC m=+0.024897892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.180694413 +0000 UTC m=+0.123357188 container start fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.186062015 +0000 UTC m=+0.128724800 container attach fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: Set ssh ssh_user
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: Set ssh ssh_config
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: ssh user set to ceph-admin. sudo will be used
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053121 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:47:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 31 01:47:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:53 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 01:47:53 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 01:47:53 np0005603541 systemd[1]: libpod-fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c.scope: Deactivated successfully.
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.802731637 +0000 UTC m=+0.745394412 container died fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8580c606c83c9825455bb17098ff6f16b5b6aaf925963274f874f8b90facf7f2-merged.mount: Deactivated successfully.
Jan 31 01:47:53 np0005603541 podman[75813]: 2026-01-31 06:47:53.843403765 +0000 UTC m=+0.786066540 container remove fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c (image=quay.io/ceph/ceph:v18, name=competent_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:47:53 np0005603541 systemd[1]: libpod-conmon-fccdbdd6eff71366b55049ab8039e43f0ab3516c0416e4e5b7f21bfb07b2fd3c.scope: Deactivated successfully.
Jan 31 01:47:53 np0005603541 podman[75871]: 2026-01-31 06:47:53.892628674 +0000 UTC m=+0.035143293 container create 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:53 np0005603541 systemd[1]: Started libpod-conmon-3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115.scope.
Jan 31 01:47:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95966225ac2c2c1da4a528febfd74c06e99359a0cb7a8106e20613f9453e7f03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95966225ac2c2c1da4a528febfd74c06e99359a0cb7a8106e20613f9453e7f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95966225ac2c2c1da4a528febfd74c06e99359a0cb7a8106e20613f9453e7f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:53 np0005603541 podman[75871]: 2026-01-31 06:47:53.96827858 +0000 UTC m=+0.110793229 container init 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:47:53 np0005603541 podman[75871]: 2026-01-31 06:47:53.972695649 +0000 UTC m=+0.115210278 container start 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:47:53 np0005603541 podman[75871]: 2026-01-31 06:47:53.877473472 +0000 UTC m=+0.019988111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:53 np0005603541 podman[75871]: 2026-01-31 06:47:53.976868571 +0000 UTC m=+0.119383220 container attach 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:47:54 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:54 np0005603541 ceph-mon[74355]: Set ssh ssh_identity_key
Jan 31 01:47:54 np0005603541 ceph-mon[74355]: Set ssh private key
Jan 31 01:47:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:54 np0005603541 objective_napier[75888]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC04sdBV6P/yGmd/RCAuS5X4kJNcffeyVA8j91lKtJBqSOlP/hxYzwTwe9Zi8qWVSoUw+P6asTtAXhdGsZdNzfTsVno6jSuNOaL89HfOgXzcp79woVRHURI8VBzQBq3HgTH0piaPhcaZJkPlpey/+cQpZS0rbkPXmjkmk0k3j0oft4+nsYyEl/p7PO765oT6K6y6/3g110Kvcgs9xuctgxHm5XSjps+22soegkNAftt2rEJFu0TQgEdONK6LIXP/kOXR6vBdhw15/5MzYP54OCzGRt8gh+5nnnVbyNcYF+hQbWNneeWLio8vryywQUDpbGc+zwDFwFeMgougMGmldLgonde7LMl4hYEOoyERGX2ppSGmDbIARtwR4Tvb6q1Ziis6e1vIccR1auVJVZn2CQsuZve8dV4s0+LsyaS9szVWjVzTFK4IcY0iHvGvXNOlDxpN692UH1RZ09bug1wWSkHPJuyFLFB/afFliCT9noKh+sNN5Pol93eviteH6TdPkc= zuul@controller
Jan 31 01:47:54 np0005603541 systemd[1]: libpod-3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115.scope: Deactivated successfully.
Jan 31 01:47:54 np0005603541 podman[75871]: 2026-01-31 06:47:54.571095813 +0000 UTC m=+0.713610462 container died 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:47:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay-95966225ac2c2c1da4a528febfd74c06e99359a0cb7a8106e20613f9453e7f03-merged.mount: Deactivated successfully.
Jan 31 01:47:54 np0005603541 podman[75871]: 2026-01-31 06:47:54.600075674 +0000 UTC m=+0.742590293 container remove 3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115 (image=quay.io/ceph/ceph:v18, name=objective_napier, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 01:47:54 np0005603541 systemd[1]: libpod-conmon-3f5973fb0b55b4266062049a5f86dc2e17fb4d749d17d4d2daf7e48781e96115.scope: Deactivated successfully.
Jan 31 01:47:54 np0005603541 podman[75927]: 2026-01-31 06:47:54.658144578 +0000 UTC m=+0.042105014 container create 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:54 np0005603541 systemd[1]: Started libpod-conmon-91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0.scope.
Jan 31 01:47:54 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:47:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f6d8a211b681e937324b34f2c8c4ce0fab8c80342c215e7216b4aca1cf6f44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f6d8a211b681e937324b34f2c8c4ce0fab8c80342c215e7216b4aca1cf6f44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f6d8a211b681e937324b34f2c8c4ce0fab8c80342c215e7216b4aca1cf6f44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:47:54 np0005603541 podman[75927]: 2026-01-31 06:47:54.71442352 +0000 UTC m=+0.098383976 container init 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:54 np0005603541 podman[75927]: 2026-01-31 06:47:54.718960581 +0000 UTC m=+0.102921017 container start 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:47:54 np0005603541 podman[75927]: 2026-01-31 06:47:54.722089448 +0000 UTC m=+0.106049904 container attach 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 01:47:54 np0005603541 podman[75927]: 2026-01-31 06:47:54.642679449 +0000 UTC m=+0.026639905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:47:55 np0005603541 ceph-mon[74355]: Set ssh ssh_identity_pub
Jan 31 01:47:55 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:47:55 np0005603541 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 01:47:55 np0005603541 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 01:47:55 np0005603541 systemd-logind[817]: New session 21 of user ceph-admin.
Jan 31 01:47:55 np0005603541 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 01:47:55 np0005603541 systemd[1]: Starting User Manager for UID 42477...
Jan 31 01:47:55 np0005603541 systemd[75973]: Queued start job for default target Main User Target.
Jan 31 01:47:55 np0005603541 systemd[75973]: Created slice User Application Slice.
Jan 31 01:47:55 np0005603541 systemd[75973]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 01:47:55 np0005603541 systemd[75973]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 01:47:55 np0005603541 systemd[75973]: Reached target Paths.
Jan 31 01:47:55 np0005603541 systemd[75973]: Reached target Timers.
Jan 31 01:47:55 np0005603541 systemd[75973]: Starting D-Bus User Message Bus Socket...
Jan 31 01:47:55 np0005603541 systemd[75973]: Starting Create User's Volatile Files and Directories...
Jan 31 01:47:55 np0005603541 systemd[75973]: Finished Create User's Volatile Files and Directories.
Jan 31 01:47:55 np0005603541 systemd[75973]: Listening on D-Bus User Message Bus Socket.
Jan 31 01:47:55 np0005603541 systemd[75973]: Reached target Sockets.
Jan 31 01:47:55 np0005603541 systemd[75973]: Reached target Basic System.
Jan 31 01:47:55 np0005603541 systemd[75973]: Reached target Main User Target.
Jan 31 01:47:55 np0005603541 systemd[75973]: Startup finished in 112ms.
Jan 31 01:47:55 np0005603541 systemd[1]: Started User Manager for UID 42477.
Jan 31 01:47:55 np0005603541 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 01:47:55 np0005603541 systemd-logind[817]: New session 23 of user ceph-admin.
Jan 31 01:47:55 np0005603541 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 01:47:56 np0005603541 systemd-logind[817]: New session 24 of user ceph-admin.
Jan 31 01:47:56 np0005603541 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 01:47:56 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:56 np0005603541 systemd-logind[817]: New session 25 of user ceph-admin.
Jan 31 01:47:56 np0005603541 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 01:47:56 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 01:47:56 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 01:47:56 np0005603541 systemd-logind[817]: New session 26 of user ceph-admin.
Jan 31 01:47:56 np0005603541 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 01:47:57 np0005603541 systemd-logind[817]: New session 27 of user ceph-admin.
Jan 31 01:47:57 np0005603541 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 01:47:57 np0005603541 systemd-logind[817]: New session 28 of user ceph-admin.
Jan 31 01:47:57 np0005603541 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 01:47:57 np0005603541 systemd-logind[817]: New session 29 of user ceph-admin.
Jan 31 01:47:57 np0005603541 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 01:47:58 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:47:58 np0005603541 ceph-mon[74355]: Deploying cephadm binary to compute-0
Jan 31 01:47:58 np0005603541 systemd-logind[817]: New session 30 of user ceph-admin.
Jan 31 01:47:58 np0005603541 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 01:47:58 np0005603541 systemd-logind[817]: New session 31 of user ceph-admin.
Jan 31 01:47:58 np0005603541 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 01:47:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:47:59 np0005603541 systemd-logind[817]: New session 32 of user ceph-admin.
Jan 31 01:47:59 np0005603541 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 01:47:59 np0005603541 systemd-logind[817]: New session 33 of user ceph-admin.
Jan 31 01:47:59 np0005603541 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 01:47:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:47:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:47:59 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Added host compute-0
Jan 31 01:47:59 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 01:47:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:47:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:47:59 np0005603541 condescending_solomon[75943]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 01:47:59 np0005603541 systemd[1]: libpod-91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0.scope: Deactivated successfully.
Jan 31 01:47:59 np0005603541 podman[75927]: 2026-01-31 06:47:59.970039398 +0000 UTC m=+5.353999834 container died 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:47:59 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e1f6d8a211b681e937324b34f2c8c4ce0fab8c80342c215e7216b4aca1cf6f44-merged.mount: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[75927]: 2026-01-31 06:48:00.027775075 +0000 UTC m=+5.411735511 container remove 91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0 (image=quay.io/ceph/ceph:v18, name=condescending_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 01:48:00 np0005603541 systemd[1]: libpod-conmon-91cfe41bf55ddc2601ac14324863be95df47baaa1560f3f08d9a87310ced68b0.scope: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.082427416 +0000 UTC m=+0.035429370 container create 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:00 np0005603541 systemd[1]: Started libpod-conmon-2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7.scope.
Jan 31 01:48:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbfc2bbdc0c37d7c8ea1107af24bb633141bea4c80dbef10f32184453f5fa770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbfc2bbdc0c37d7c8ea1107af24bb633141bea4c80dbef10f32184453f5fa770/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbfc2bbdc0c37d7c8ea1107af24bb633141bea4c80dbef10f32184453f5fa770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.146074688 +0000 UTC m=+0.099076652 container init 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.152494366 +0000 UTC m=+0.105496320 container start 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.156552585 +0000 UTC m=+0.109554579 container attach 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.066553597 +0000 UTC m=+0.019555571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:00 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.411130462 +0000 UTC m=+0.036681911 container create c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:48:00 np0005603541 systemd[1]: Started libpod-conmon-c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb.scope.
Jan 31 01:48:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.479118381 +0000 UTC m=+0.104669850 container init c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.483289024 +0000 UTC m=+0.108840473 container start c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.489083376 +0000 UTC m=+0.114634825 container attach c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.392562267 +0000 UTC m=+0.018113736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:00 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 01:48:00 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:00 np0005603541 silly_nash[76680]: Scheduled mon update...
Jan 31 01:48:00 np0005603541 systemd[1]: libpod-2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7.scope: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.747524827 +0000 UTC m=+0.700526781 container died 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 31 01:48:00 np0005603541 admiring_meninsky[76754]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 01:48:00 np0005603541 systemd[1]: libpod-c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb.scope: Deactivated successfully.
Jan 31 01:48:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fbfc2bbdc0c37d7c8ea1107af24bb633141bea4c80dbef10f32184453f5fa770-merged.mount: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76621]: 2026-01-31 06:48:00.786187666 +0000 UTC m=+0.739189620 container remove 2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7 (image=quay.io/ceph/ceph:v18, name=silly_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 01:48:00 np0005603541 systemd[1]: libpod-conmon-2a2a7a93e3feca31e9fca6e9f447d3415d78f3e81bb1b481a45082e74b0090e7.scope: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76737]: 2026-01-31 06:48:00.79977239 +0000 UTC m=+0.425323869 container died c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:48:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8cedef8d6f16a444ca04e84fe6e2cf9c939eeb08500d7dcda45d40e88f78c21c-merged.mount: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76787]: 2026-01-31 06:48:00.837805613 +0000 UTC m=+0.063789227 container remove c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb (image=quay.io/ceph/ceph:v18, name=admiring_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:00 np0005603541 systemd[1]: libpod-conmon-c865bf42aa729faf33590a08cc98f95ce3e262d7176286135f6227a4ddd768fb.scope: Deactivated successfully.
Jan 31 01:48:00 np0005603541 podman[76802]: 2026-01-31 06:48:00.855126568 +0000 UTC m=+0.055993495 container create eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:00 np0005603541 systemd[1]: Started libpod-conmon-eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059.scope.
Jan 31 01:48:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:00 np0005603541 podman[76802]: 2026-01-31 06:48:00.818499279 +0000 UTC m=+0.019366216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65a51df06bfbc9b62bf225ea6fd5227caf3fc92bfb0703cacebbc54e6f32f3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65a51df06bfbc9b62bf225ea6fd5227caf3fc92bfb0703cacebbc54e6f32f3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65a51df06bfbc9b62bf225ea6fd5227caf3fc92bfb0703cacebbc54e6f32f3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:00 np0005603541 podman[76802]: 2026-01-31 06:48:00.933311576 +0000 UTC m=+0.134178523 container init eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:00 np0005603541 podman[76802]: 2026-01-31 06:48:00.937034728 +0000 UTC m=+0.137901655 container start eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 01:48:00 np0005603541 podman[76802]: 2026-01-31 06:48:00.940230496 +0000 UTC m=+0.141097453 container attach eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: Added host compute-0
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:01 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 01:48:01 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 01:48:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:48:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:01 np0005603541 epic_moore[76821]: Scheduled mgr update...
Jan 31 01:48:01 np0005603541 systemd[1]: libpod-eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059.scope: Deactivated successfully.
Jan 31 01:48:01 np0005603541 podman[76802]: 2026-01-31 06:48:01.532279775 +0000 UTC m=+0.733146712 container died eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:01 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a65a51df06bfbc9b62bf225ea6fd5227caf3fc92bfb0703cacebbc54e6f32f3c-merged.mount: Deactivated successfully.
Jan 31 01:48:01 np0005603541 podman[76802]: 2026-01-31 06:48:01.584053225 +0000 UTC m=+0.784920152 container remove eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059 (image=quay.io/ceph/ceph:v18, name=epic_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:01 np0005603541 systemd[1]: libpod-conmon-eb27aa993f4369b53eb1a3cef1ecf3018d2c77e6cc95fc38f5ac01c802b9f059.scope: Deactivated successfully.
Jan 31 01:48:01 np0005603541 podman[77091]: 2026-01-31 06:48:01.634543104 +0000 UTC m=+0.039047309 container create f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:01 np0005603541 systemd[1]: Started libpod-conmon-f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10.scope.
Jan 31 01:48:01 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780dab6def1dfd4812b7e630ec156186c6f89c66a883c08e88e521e8c5f21501/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780dab6def1dfd4812b7e630ec156186c6f89c66a883c08e88e521e8c5f21501/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780dab6def1dfd4812b7e630ec156186c6f89c66a883c08e88e521e8c5f21501/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:01 np0005603541 podman[77091]: 2026-01-31 06:48:01.701720163 +0000 UTC m=+0.106224398 container init f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:01 np0005603541 podman[77091]: 2026-01-31 06:48:01.706330215 +0000 UTC m=+0.110834420 container start f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 01:48:01 np0005603541 podman[77091]: 2026-01-31 06:48:01.710203881 +0000 UTC m=+0.114708106 container attach f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 01:48:01 np0005603541 podman[77091]: 2026-01-31 06:48:01.622336404 +0000 UTC m=+0.026840630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:01 np0005603541 podman[77169]: 2026-01-31 06:48:01.885655386 +0000 UTC m=+0.052413857 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 01:48:02 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:48:02 np0005603541 podman[77169]: 2026-01-31 06:48:02.194556137 +0000 UTC m=+0.361314598 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:02 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:02 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 01:48:02 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:02 np0005603541 musing_pare[77132]: Scheduled crash update...
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: Saving service mon spec with placement count:5
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:02 np0005603541 podman[77091]: 2026-01-31 06:48:02.279673856 +0000 UTC m=+0.684178081 container died f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:02 np0005603541 systemd[1]: libpod-f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10.scope: Deactivated successfully.
Jan 31 01:48:02 np0005603541 systemd[1]: var-lib-containers-storage-overlay-780dab6def1dfd4812b7e630ec156186c6f89c66a883c08e88e521e8c5f21501-merged.mount: Deactivated successfully.
Jan 31 01:48:02 np0005603541 podman[77091]: 2026-01-31 06:48:02.319316148 +0000 UTC m=+0.723820353 container remove f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10 (image=quay.io/ceph/ceph:v18, name=musing_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:48:02 np0005603541 systemd[1]: libpod-conmon-f5c7f7cbce9939f26b1cce2c70a68b5658c8f4f6c4faa579a6228d46a7bdaf10.scope: Deactivated successfully.
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:02 np0005603541 podman[77249]: 2026-01-31 06:48:02.371119099 +0000 UTC m=+0.037711566 container create 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:02 np0005603541 systemd[1]: Started libpod-conmon-139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00.scope.
Jan 31 01:48:02 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3186d2526d852955f550c760e18d65ec456e9def0d2803797cec15375a836bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3186d2526d852955f550c760e18d65ec456e9def0d2803797cec15375a836bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3186d2526d852955f550c760e18d65ec456e9def0d2803797cec15375a836bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:02 np0005603541 podman[77249]: 2026-01-31 06:48:02.439567759 +0000 UTC m=+0.106160256 container init 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:02 np0005603541 podman[77249]: 2026-01-31 06:48:02.447132714 +0000 UTC m=+0.113725171 container start 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:48:02 np0005603541 podman[77249]: 2026-01-31 06:48:02.354165423 +0000 UTC m=+0.020757900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:02 np0005603541 podman[77249]: 2026-01-31 06:48:02.451190664 +0000 UTC m=+0.117783161 container attach 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:02 np0005603541 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77382 (sysctl)
Jan 31 01:48:02 np0005603541 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 01:48:02 np0005603541 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3642973481' entity='client.admin' 
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77249]: 2026-01-31 06:48:03.063832958 +0000 UTC m=+0.730425455 container died 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:48:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d3186d2526d852955f550c760e18d65ec456e9def0d2803797cec15375a836bb-merged.mount: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77249]: 2026-01-31 06:48:03.104325062 +0000 UTC m=+0.770917529 container remove 139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00 (image=quay.io/ceph/ceph:v18, name=agitated_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-conmon-139c9677b0833eb4d9bb1dc0e4a416fdbe754dc4d81fa8c8e5f979c662564e00.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77490]: 2026-01-31 06:48:03.151068449 +0000 UTC m=+0.033148615 container create daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:03 np0005603541 systemd[1]: Started libpod-conmon-daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6.scope.
Jan 31 01:48:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6d464c98572603bc4119123b10bd24b909b954c9a1bff565a3d796e3807fa0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6d464c98572603bc4119123b10bd24b909b954c9a1bff565a3d796e3807fa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6d464c98572603bc4119123b10bd24b909b954c9a1bff565a3d796e3807fa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:03 np0005603541 podman[77490]: 2026-01-31 06:48:03.134738788 +0000 UTC m=+0.016818974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:03 np0005603541 podman[77490]: 2026-01-31 06:48:03.238449603 +0000 UTC m=+0.120529789 container init daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:03 np0005603541 podman[77490]: 2026-01-31 06:48:03.243501357 +0000 UTC m=+0.125581543 container start daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:48:03 np0005603541 podman[77490]: 2026-01-31 06:48:03.248035888 +0000 UTC m=+0.130116054 container attach daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: Saving service mgr spec with placement count:2
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: Saving service crash spec with placement *
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3642973481' entity='client.admin' 
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:03 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 31 01:48:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.846952905 +0000 UTC m=+0.031464593 container create 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:48:03 np0005603541 podman[77748]: 2026-01-31 06:48:03.855696029 +0000 UTC m=+0.029781731 container died daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 01:48:03 np0005603541 systemd[1]: Started libpod-conmon-2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba.scope.
Jan 31 01:48:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fa6d464c98572603bc4119123b10bd24b909b954c9a1bff565a3d796e3807fa0-merged.mount: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77748]: 2026-01-31 06:48:03.894192875 +0000 UTC m=+0.068278557 container remove daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6 (image=quay.io/ceph/ceph:v18, name=cool_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:48:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-conmon-daded76c72f2c34cf42e168ca04191bae18bcb25aefe5cfa6bdb631f4ff6eeb6.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.909265735 +0000 UTC m=+0.093777443 container init 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.913720084 +0000 UTC m=+0.098231772 container start 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:03 np0005603541 relaxed_bartik[77770]: 167 167
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.917345762 +0000 UTC m=+0.101857480 container attach 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.917985418 +0000 UTC m=+0.102497106 container died 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.832787488 +0000 UTC m=+0.017299206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-09c7f6e4b487960a6617244b18d5a8207cb6fca1e4890271cbf5b0cfc3d8e3ab-merged.mount: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77741]: 2026-01-31 06:48:03.951060419 +0000 UTC m=+0.135572107 container remove 2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:48:03 np0005603541 systemd[1]: libpod-conmon-2b00fbbc29f7d911574183a467ded4553d5cb3d177b803dfdd5a7243f2d67cba.scope: Deactivated successfully.
Jan 31 01:48:03 np0005603541 podman[77773]: 2026-01-31 06:48:03.99669144 +0000 UTC m=+0.084680209 container create 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:04 np0005603541 systemd[1]: Started libpod-conmon-7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e.scope.
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:03.940792548 +0000 UTC m=+0.028781327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:04 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ae202a18fe24a2369abd5e61dec62db4d3055cb51bc0aad9cabb665cd6976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ae202a18fe24a2369abd5e61dec62db4d3055cb51bc0aad9cabb665cd6976/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ae202a18fe24a2369abd5e61dec62db4d3055cb51bc0aad9cabb665cd6976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:04.054280563 +0000 UTC m=+0.142269342 container init 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:04.057877061 +0000 UTC m=+0.145865820 container start 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:04.061483429 +0000 UTC m=+0.149472188 container attach 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 01:48:04 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:48:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:04 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:04 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 01:48:04 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 01:48:04 np0005603541 xenodochial_bose[77803]: Added label _admin to host compute-0
Jan 31 01:48:04 np0005603541 systemd[1]: libpod-7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e.scope: Deactivated successfully.
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:04.649755346 +0000 UTC m=+0.737744145 container died 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-978ae202a18fe24a2369abd5e61dec62db4d3055cb51bc0aad9cabb665cd6976-merged.mount: Deactivated successfully.
Jan 31 01:48:04 np0005603541 podman[77773]: 2026-01-31 06:48:04.685865251 +0000 UTC m=+0.773854010 container remove 7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 01:48:04 np0005603541 systemd[1]: libpod-conmon-7f66037941c311b0e3f600b578789f0f31bc8cea89215e2154c78e0e6f63f92e.scope: Deactivated successfully.
Jan 31 01:48:04 np0005603541 podman[77841]: 2026-01-31 06:48:04.733608443 +0000 UTC m=+0.033876092 container create 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:04 np0005603541 systemd[1]: Started libpod-conmon-62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50.scope.
Jan 31 01:48:04 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5281aa21202b61ada7a9ccb873d77c76d474e245ae5a6333f18114eb4ddaf34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5281aa21202b61ada7a9ccb873d77c76d474e245ae5a6333f18114eb4ddaf34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5281aa21202b61ada7a9ccb873d77c76d474e245ae5a6333f18114eb4ddaf34e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:04 np0005603541 podman[77841]: 2026-01-31 06:48:04.805646911 +0000 UTC m=+0.105914630 container init 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:04 np0005603541 podman[77841]: 2026-01-31 06:48:04.809837214 +0000 UTC m=+0.110104873 container start 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 01:48:04 np0005603541 podman[77841]: 2026-01-31 06:48:04.813972815 +0000 UTC m=+0.114240494 container attach 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:04 np0005603541 podman[77841]: 2026-01-31 06:48:04.720664085 +0000 UTC m=+0.020931764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 31 01:48:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2984268029' entity='client.admin' 
Jan 31 01:48:05 np0005603541 systemd[1]: libpod-62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50.scope: Deactivated successfully.
Jan 31 01:48:05 np0005603541 podman[77841]: 2026-01-31 06:48:05.37121676 +0000 UTC m=+0.671484429 container died 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 01:48:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5281aa21202b61ada7a9ccb873d77c76d474e245ae5a6333f18114eb4ddaf34e-merged.mount: Deactivated successfully.
Jan 31 01:48:05 np0005603541 podman[77841]: 2026-01-31 06:48:05.416201854 +0000 UTC m=+0.716469513 container remove 62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50 (image=quay.io/ceph/ceph:v18, name=practical_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:05 np0005603541 systemd[1]: libpod-conmon-62df0350392e21108dad6f65ce74de683b8c3d2029a1299add53bf4862c35c50.scope: Deactivated successfully.
Jan 31 01:48:05 np0005603541 podman[77895]: 2026-01-31 06:48:05.466465997 +0000 UTC m=+0.036051426 container create 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:05 np0005603541 systemd[1]: Started libpod-conmon-7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88.scope.
Jan 31 01:48:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fb95023df6e9a022f3e86fff00b87273feeaa2c456e1b9830e237ec5cd064c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fb95023df6e9a022f3e86fff00b87273feeaa2c456e1b9830e237ec5cd064c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fb95023df6e9a022f3e86fff00b87273feeaa2c456e1b9830e237ec5cd064c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:05 np0005603541 podman[77895]: 2026-01-31 06:48:05.532441036 +0000 UTC m=+0.102026455 container init 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:48:05 np0005603541 podman[77895]: 2026-01-31 06:48:05.536624698 +0000 UTC m=+0.106210117 container start 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:48:05 np0005603541 podman[77895]: 2026-01-31 06:48:05.541847446 +0000 UTC m=+0.111432885 container attach 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:05 np0005603541 podman[77895]: 2026-01-31 06:48:05.447708486 +0000 UTC m=+0.017293915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:05 np0005603541 ceph-mon[74355]: Added label _admin to host compute-0
Jan 31 01:48:05 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2984268029' entity='client.admin' 
Jan 31 01:48:06 np0005603541 ceph-mgr[74648]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 01:48:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 31 01:48:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2199420163' entity='client.admin' 
Jan 31 01:48:06 np0005603541 fervent_dirac[77911]: set mgr/dashboard/cluster/status
Jan 31 01:48:06 np0005603541 systemd[1]: libpod-7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88.scope: Deactivated successfully.
Jan 31 01:48:06 np0005603541 podman[77895]: 2026-01-31 06:48:06.211338645 +0000 UTC m=+0.780924064 container died 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:48:06 np0005603541 systemd[1]: var-lib-containers-storage-overlay-50fb95023df6e9a022f3e86fff00b87273feeaa2c456e1b9830e237ec5cd064c-merged.mount: Deactivated successfully.
Jan 31 01:48:06 np0005603541 podman[77895]: 2026-01-31 06:48:06.246778864 +0000 UTC m=+0.816364273 container remove 7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88 (image=quay.io/ceph/ceph:v18, name=fervent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:06 np0005603541 systemd[1]: libpod-conmon-7c62a633c498f27f2f27085a70e3507e1594598dd0301f02e267d7123b458e88.scope: Deactivated successfully.
Jan 31 01:48:06 np0005603541 podman[77957]: 2026-01-31 06:48:06.390761477 +0000 UTC m=+0.038865705 container create 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 01:48:06 np0005603541 systemd[1]: Started libpod-conmon-8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40.scope.
Jan 31 01:48:06 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd07b698c23cff7d6fc9af16d7e99dc563cdcd5c97e5dd40a6cb570e36573c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd07b698c23cff7d6fc9af16d7e99dc563cdcd5c97e5dd40a6cb570e36573c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd07b698c23cff7d6fc9af16d7e99dc563cdcd5c97e5dd40a6cb570e36573c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd07b698c23cff7d6fc9af16d7e99dc563cdcd5c97e5dd40a6cb570e36573c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:06 np0005603541 podman[77957]: 2026-01-31 06:48:06.463845871 +0000 UTC m=+0.111950119 container init 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:06 np0005603541 podman[77957]: 2026-01-31 06:48:06.46869002 +0000 UTC m=+0.116794248 container start 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:48:06 np0005603541 podman[77957]: 2026-01-31 06:48:06.373297878 +0000 UTC m=+0.021402106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:06 np0005603541 podman[77957]: 2026-01-31 06:48:06.472310548 +0000 UTC m=+0.120414776 container attach 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 01:48:06 np0005603541 python3[78003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:06 np0005603541 podman[78004]: 2026-01-31 06:48:06.96015063 +0000 UTC m=+0.098067967 container create c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:06 np0005603541 podman[78004]: 2026-01-31 06:48:06.890346477 +0000 UTC m=+0.028263804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:07 np0005603541 systemd[1]: Started libpod-conmon-c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240.scope.
Jan 31 01:48:07 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160a4fe24c302f8b126ccf0ae14e693a409b8994ed0726de9e11a47aa04ae80e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160a4fe24c302f8b126ccf0ae14e693a409b8994ed0726de9e11a47aa04ae80e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:07 np0005603541 podman[78004]: 2026-01-31 06:48:07.09140031 +0000 UTC m=+0.229317647 container init c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 01:48:07 np0005603541 podman[78004]: 2026-01-31 06:48:07.1003423 +0000 UTC m=+0.238259597 container start c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 01:48:07 np0005603541 podman[78004]: 2026-01-31 06:48:07.112891418 +0000 UTC m=+0.250808725 container attach c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2199420163' entity='client.admin' 
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]: [
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:    {
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "available": false,
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "ceph_device": false,
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "lsm_data": {},
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "lvs": [],
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "path": "/dev/sr0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "rejected_reasons": [
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "Insufficient space (<5GB)",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "Has a FileSystem"
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        ],
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        "sys_api": {
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "actuators": null,
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "device_nodes": "sr0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "devname": "sr0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "human_readable_size": "482.00 KB",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "id_bus": "ata",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "model": "QEMU DVD-ROM",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "nr_requests": "2",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "parent": "/dev/sr0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "partitions": {},
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "path": "/dev/sr0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "removable": "1",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "rev": "2.5+",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "ro": "0",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "rotational": "1",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "sas_address": "",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "sas_device_handle": "",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "scheduler_mode": "mq-deadline",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "sectors": 0,
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "sectorsize": "2048",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "size": 493568.0,
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "support_discard": "2048",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "type": "disk",
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:            "vendor": "QEMU"
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:        }
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]:    }
Jan 31 01:48:07 np0005603541 adoring_lovelace[77973]: ]
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2872450868' entity='client.admin' 
Jan 31 01:48:07 np0005603541 systemd[1]: libpod-8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40.scope: Deactivated successfully.
Jan 31 01:48:07 np0005603541 systemd[1]: libpod-8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40.scope: Consumed 1.171s CPU time.
Jan 31 01:48:07 np0005603541 systemd[1]: libpod-c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240.scope: Deactivated successfully.
Jan 31 01:48:07 np0005603541 podman[78004]: 2026-01-31 06:48:07.704810102 +0000 UTC m=+0.842727399 container died c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:48:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-160a4fe24c302f8b126ccf0ae14e693a409b8994ed0726de9e11a47aa04ae80e-merged.mount: Deactivated successfully.
Jan 31 01:48:07 np0005603541 podman[78973]: 2026-01-31 06:48:07.769681805 +0000 UTC m=+0.062559557 container died 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 01:48:07 np0005603541 podman[78004]: 2026-01-31 06:48:07.795723813 +0000 UTC m=+0.933641110 container remove c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240 (image=quay.io/ceph/ceph:v18, name=compassionate_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 01:48:07 np0005603541 systemd[1]: libpod-conmon-c7f35be038d55d44d74c07b48d00a26a1e7b17104ba0bfa405990cf68ede8240.scope: Deactivated successfully.
Jan 31 01:48:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5bd07b698c23cff7d6fc9af16d7e99dc563cdcd5c97e5dd40a6cb570e36573c6-merged.mount: Deactivated successfully.
Jan 31 01:48:07 np0005603541 podman[78973]: 2026-01-31 06:48:07.893611485 +0000 UTC m=+0.186489207 container remove 8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 01:48:07 np0005603541 systemd[1]: libpod-conmon-8ece6d93f33ffd9883f612ae7a8fba2feb7e9bdaeaf10a0256b8b41b08d58c40.scope: Deactivated successfully.
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mgr[74648]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 01:48:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:08 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:48:08 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:48:08 np0005603541 ansible-async_wrapper.py[79198]: Invoked with j389807763781 30 /home/zuul/.ansible/tmp/ansible-tmp-1769842088.0989232-37303-124775516667158/AnsiballZ_command.py _
Jan 31 01:48:08 np0005603541 ansible-async_wrapper.py[79276]: Starting module and watcher
Jan 31 01:48:08 np0005603541 ansible-async_wrapper.py[79276]: Start watching 79277 (30)
Jan 31 01:48:08 np0005603541 ansible-async_wrapper.py[79277]: Start module (79277)
Jan 31 01:48:08 np0005603541 ansible-async_wrapper.py[79198]: Return async_wrapper task started.
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2872450868' entity='client.admin' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:48:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:08 np0005603541 python3[79279]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:08 np0005603541 podman[79374]: 2026-01-31 06:48:08.884396234 +0000 UTC m=+0.046148552 container create 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 01:48:08 np0005603541 systemd[1]: Started libpod-conmon-72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6.scope.
Jan 31 01:48:08 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:08 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024f855d2867256f0dc7284cdf3e6c05a3070fc7925bdcacd2065be13e646628/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:08 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024f855d2867256f0dc7284cdf3e6c05a3070fc7925bdcacd2065be13e646628/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:08 np0005603541 podman[79374]: 2026-01-31 06:48:08.864243504 +0000 UTC m=+0.025995852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:08 np0005603541 podman[79374]: 2026-01-31 06:48:08.97016462 +0000 UTC m=+0.131916958 container init 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:48:08 np0005603541 podman[79374]: 2026-01-31 06:48:08.974897447 +0000 UTC m=+0.136649765 container start 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 01:48:09 np0005603541 podman[79374]: 2026-01-31 06:48:09.072420066 +0000 UTC m=+0.234172384 container attach 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:09 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:09 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:09 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:48:09 np0005603541 interesting_shirley[79447]: 
Jan 31 01:48:09 np0005603541 interesting_shirley[79447]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 01:48:09 np0005603541 systemd[1]: libpod-72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6.scope: Deactivated successfully.
Jan 31 01:48:09 np0005603541 podman[79374]: 2026-01-31 06:48:09.522489524 +0000 UTC m=+0.684241852 container died 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:09 np0005603541 systemd[1]: var-lib-containers-storage-overlay-024f855d2867256f0dc7284cdf3e6c05a3070fc7925bdcacd2065be13e646628-merged.mount: Deactivated successfully.
Jan 31 01:48:09 np0005603541 podman[79374]: 2026-01-31 06:48:09.678475099 +0000 UTC m=+0.840227417 container remove 72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6 (image=quay.io/ceph/ceph:v18, name=interesting_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:09 np0005603541 systemd[1]: libpod-conmon-72fe1cf0b794b851b2c6520c573813aeebb933a8f1ee0ab4d4c5d57597c07cb6.scope: Deactivated successfully.
Jan 31 01:48:09 np0005603541 ansible-async_wrapper.py[79277]: Module complete (79277)
Jan 31 01:48:09 np0005603541 ceph-mon[74355]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:48:09 np0005603541 ceph-mon[74355]: Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:09 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:09 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:10 np0005603541 python3[80114]: ansible-ansible.legacy.async_status Invoked with jid=j389807763781.79198 mode=status _async_dir=/root/.ansible_async
Jan 31 01:48:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:10 np0005603541 python3[80300]: ansible-ansible.legacy.async_status Invoked with jid=j389807763781.79198 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 01:48:10 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:10 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:10 np0005603541 python3[80550]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:48:10 np0005603541 ceph-mon[74355]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:11 np0005603541 python3[80848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.326635298 +0000 UTC m=+0.093093471 container create 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.258233139 +0000 UTC m=+0.024691332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:11 np0005603541 systemd[1]: Started libpod-conmon-88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b.scope.
Jan 31 01:48:11 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649ce34817142adece11d4eba946bfbb90c519670cbea0546df350ca2e4776c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649ce34817142adece11d4eba946bfbb90c519670cbea0546df350ca2e4776c2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649ce34817142adece11d4eba946bfbb90c519670cbea0546df350ca2e4776c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.395582088 +0000 UTC m=+0.162040301 container init 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.403323032 +0000 UTC m=+0.169781215 container start 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.422356688 +0000 UTC m=+0.188814861 container attach 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 2af54f5f-0210-4981-ad4c-b5c16e86fa9d (Updating crash deployment (+1 -> 1))
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:11 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 01:48:11 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:48:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:48:11 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:48:11 np0005603541 optimistic_snyder[81014]: 
Jan 31 01:48:11 np0005603541 optimistic_snyder[81014]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 01:48:11 np0005603541 podman[80924]: 2026-01-31 06:48:11.918983594 +0000 UTC m=+0.685441767 container died 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:11 np0005603541 systemd[1]: libpod-88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b.scope: Deactivated successfully.
Jan 31 01:48:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-649ce34817142adece11d4eba946bfbb90c519670cbea0546df350ca2e4776c2-merged.mount: Deactivated successfully.
Jan 31 01:48:12 np0005603541 podman[80924]: 2026-01-31 06:48:12.110491314 +0000 UTC m=+0.876949507 container remove 88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b (image=quay.io/ceph/ceph:v18, name=optimistic_snyder, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 01:48:12 np0005603541 systemd[1]: libpod-conmon-88bb31811fa1b7fefa187e8e6fc91f30539fd466a33460bd827c706f079b780b.scope: Deactivated successfully.
Jan 31 01:48:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.278754834 +0000 UTC m=+0.100680431 container create 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.194165544 +0000 UTC m=+0.016091161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:12 np0005603541 systemd[1]: Started libpod-conmon-289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940.scope.
Jan 31 01:48:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.414409585 +0000 UTC m=+0.236335222 container init 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.419874098 +0000 UTC m=+0.241799695 container start 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:12 np0005603541 adoring_jepsen[81257]: 167 167
Jan 31 01:48:12 np0005603541 systemd[1]: libpod-289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940.scope: Deactivated successfully.
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.458373428 +0000 UTC m=+0.280299015 container attach 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.459084244 +0000 UTC m=+0.281009881 container died 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:48:12 np0005603541 python3[81284]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b3afb880dbb669fddd7c858646a53df2e4c0f5e1db04e358bcdec58238970488-merged.mount: Deactivated successfully.
Jan 31 01:48:12 np0005603541 podman[81240]: 2026-01-31 06:48:12.785700703 +0000 UTC m=+0.607626300 container remove 289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:48:12 np0005603541 systemd[1]: libpod-conmon-289de3534d0e22e96ed98c83144d818fb3a8d6a868713afbd6fb648299f50940.scope: Deactivated successfully.
Jan 31 01:48:12 np0005603541 podman[81299]: 2026-01-31 06:48:12.881323029 +0000 UTC m=+0.350039002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:13 np0005603541 ceph-mon[74355]: Deploying daemon crash.compute-0 on compute-0
Jan 31 01:48:13 np0005603541 podman[81299]: 2026-01-31 06:48:13.022888403 +0000 UTC m=+0.491604356 container create 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:13 np0005603541 systemd[1]: Started libpod-conmon-7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b.scope.
Jan 31 01:48:13 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fcee232eab9a6c1bbeea9570cdc8d843739b5f10f834c6b9cb329591fec634/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fcee232eab9a6c1bbeea9570cdc8d843739b5f10f834c6b9cb329591fec634/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11fcee232eab9a6c1bbeea9570cdc8d843739b5f10f834c6b9cb329591fec634/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:13 np0005603541 podman[81299]: 2026-01-31 06:48:13.317462245 +0000 UTC m=+0.786178248 container init 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:13 np0005603541 podman[81299]: 2026-01-31 06:48:13.321456484 +0000 UTC m=+0.790172427 container start 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:48:13 np0005603541 systemd[1]: Reloading.
Jan 31 01:48:13 np0005603541 podman[81299]: 2026-01-31 06:48:13.372999696 +0000 UTC m=+0.841715639 container attach 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:13 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:48:13 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:48:13 np0005603541 systemd[1]: Reloading.
Jan 31 01:48:13 np0005603541 ansible-async_wrapper.py[79276]: Done in kid B.
Jan 31 01:48:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:13 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:48:13 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:48:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 31 01:48:13 np0005603541 systemd[1]: Starting Ceph crash.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:48:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1928576354' entity='client.admin' 
Jan 31 01:48:13 np0005603541 systemd[1]: libpod-7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b.scope: Deactivated successfully.
Jan 31 01:48:13 np0005603541 podman[81299]: 2026-01-31 06:48:13.97868626 +0000 UTC m=+1.447402193 container died 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:14 np0005603541 systemd[1]: var-lib-containers-storage-overlay-11fcee232eab9a6c1bbeea9570cdc8d843739b5f10f834c6b9cb329591fec634-merged.mount: Deactivated successfully.
Jan 31 01:48:14 np0005603541 podman[81299]: 2026-01-31 06:48:14.505011192 +0000 UTC m=+1.973727145 container remove 7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b (image=quay.io/ceph/ceph:v18, name=intelligent_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 01:48:14 np0005603541 systemd[1]: libpod-conmon-7fc8e72a865b66c56a66621e980c00c880bc3fbd0500870d6bbe33bbc4cce45b.scope: Deactivated successfully.
Jan 31 01:48:14 np0005603541 podman[81478]: 2026-01-31 06:48:14.654234917 +0000 UTC m=+0.031265190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:14 np0005603541 podman[81478]: 2026-01-31 06:48:14.786674905 +0000 UTC m=+0.163705138 container create 3a506549d180ef0d1dcc0e03dda069389726387b87595053e75fce24cd50bffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 01:48:14 np0005603541 python3[81513]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f48301c8fcbfb68b89f0aece33dfe45c7123dc8d3e17a4afd877dd902e3bb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f48301c8fcbfb68b89f0aece33dfe45c7123dc8d3e17a4afd877dd902e3bb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f48301c8fcbfb68b89f0aece33dfe45c7123dc8d3e17a4afd877dd902e3bb0/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f48301c8fcbfb68b89f0aece33dfe45c7123dc8d3e17a4afd877dd902e3bb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 podman[81514]: 2026-01-31 06:48:15.047085094 +0000 UTC m=+0.187605053 container create 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1928576354' entity='client.admin' 
Jan 31 01:48:15 np0005603541 podman[81514]: 2026-01-31 06:48:14.986948081 +0000 UTC m=+0.127468070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:15 np0005603541 podman[81478]: 2026-01-31 06:48:15.344268566 +0000 UTC m=+0.721298829 container init 3a506549d180ef0d1dcc0e03dda069389726387b87595053e75fce24cd50bffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:48:15 np0005603541 podman[81478]: 2026-01-31 06:48:15.349093514 +0000 UTC m=+0.726123757 container start 3a506549d180ef0d1dcc0e03dda069389726387b87595053e75fce24cd50bffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 01:48:15 np0005603541 bash[81478]: 3a506549d180ef0d1dcc0e03dda069389726387b87595053e75fce24cd50bffa
Jan 31 01:48:15 np0005603541 systemd[1]: Started Ceph crash.compute-0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:48:15 np0005603541 systemd[1]: Started libpod-conmon-5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b.scope.
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf8ec4efbd94ebf15634343f79545802c88bcbe746be17d90d2a3c561f99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf8ec4efbd94ebf15634343f79545802c88bcbe746be17d90d2a3c561f99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9385bf8ec4efbd94ebf15634343f79545802c88bcbe746be17d90d2a3c561f99/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:15 np0005603541 podman[81514]: 2026-01-31 06:48:15.649157598 +0000 UTC m=+0.789677557 container init 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:15 np0005603541 podman[81514]: 2026-01-31 06:48:15.658264962 +0000 UTC m=+0.798784921 container start 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.750+0000 7fbfc3216640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.750+0000 7fbfc3216640 -1 AuthRegistry(0x7fbfbc067cf0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.751+0000 7fbfc3216640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.751+0000 7fbfc3216640 -1 AuthRegistry(0x7fbfc3215000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.752+0000 7fbfc0f8b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: 2026-01-31T06:48:15.752+0000 7fbfc3216640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 01:48:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-0[81531]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:15 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 2af54f5f-0210-4981-ad4c-b5c16e86fa9d (Updating crash deployment (+1 -> 1))
Jan 31 01:48:15 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 2af54f5f-0210-4981-ad4c-b5c16e86fa9d (Updating crash deployment (+1 -> 1)) in 4 seconds
Jan 31 01:48:15 np0005603541 podman[81514]: 2026-01-31 06:48:15.904319781 +0000 UTC m=+1.044839780 container attach 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:15 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev f025b051-374e-4db7-b03f-14e3566f2dff does not exist
Jan 31 01:48:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 5be00783-1c3c-41c6-817a-44ab5f3711b5 does not exist
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:48:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/649208161' entity='client.admin' 
Jan 31 01:48:16 np0005603541 systemd[1]: libpod-5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b.scope: Deactivated successfully.
Jan 31 01:48:16 np0005603541 podman[81514]: 2026-01-31 06:48:16.311172642 +0000 UTC m=+1.451692611 container died 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9385bf8ec4efbd94ebf15634343f79545802c88bcbe746be17d90d2a3c561f99-merged.mount: Deactivated successfully.
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:16 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/649208161' entity='client.admin' 
Jan 31 01:48:17 np0005603541 podman[81514]: 2026-01-31 06:48:17.020668987 +0000 UTC m=+2.161188976 container remove 5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b (image=quay.io/ceph/ceph:v18, name=modest_kalam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:48:17 np0005603541 systemd[1]: libpod-conmon-5b2ae6896c2896762a7b2586cdbabdf13bffa88e64cbc3b16067fd8c5db73d5b.scope: Deactivated successfully.
Jan 31 01:48:17 np0005603541 podman[81835]: 2026-01-31 06:48:17.340284378 +0000 UTC m=+0.067487308 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:48:17 np0005603541 python3[81820]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:17 np0005603541 podman[81835]: 2026-01-31 06:48:17.43388667 +0000 UTC m=+0.161089600 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:48:17 np0005603541 podman[81856]: 2026-01-31 06:48:17.743461098 +0000 UTC m=+0.353294515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:17 np0005603541 podman[81856]: 2026-01-31 06:48:17.916799091 +0000 UTC m=+0.526632478 container create 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:18 np0005603541 systemd[1]: Started libpod-conmon-4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627.scope.
Jan 31 01:48:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485149684d346739ca6fd94e58077411ca404a1b96ad4338ccc7091d31e8528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485149684d346739ca6fd94e58077411ca404a1b96ad4338ccc7091d31e8528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485149684d346739ca6fd94e58077411ca404a1b96ad4338ccc7091d31e8528/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:18 np0005603541 podman[81856]: 2026-01-31 06:48:18.139503178 +0000 UTC m=+0.749336635 container init 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:18 np0005603541 podman[81856]: 2026-01-31 06:48:18.147569239 +0000 UTC m=+0.757402596 container start 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:18 np0005603541 podman[81856]: 2026-01-31 06:48:18.231115906 +0000 UTC m=+0.840949263 container attach 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 43bb7c24-9a1a-4219-b839-307c8df76206 does not exist
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev afc52d22-5f6a-4ac9-b403-a184d534cd3d does not exist
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev ec29f061-79e8-4ebd-b7ec-b16c694bbe76 does not exist
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 1 completed events
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:48:18 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 31 01:48:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/696134221' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:18.992671622 +0000 UTC m=+0.018662488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.130670006 +0000 UTC m=+0.156660862 container create b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/696134221' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 01:48:19 np0005603541 systemd[1]: Started libpod-conmon-b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768.scope.
Jan 31 01:48:19 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.33670455 +0000 UTC m=+0.362695426 container init b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.341350664 +0000 UTC m=+0.367341520 container start b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 01:48:19 np0005603541 charming_neumann[82128]: 167 167
Jan 31 01:48:19 np0005603541 systemd[1]: libpod-b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768.scope: Deactivated successfully.
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.477884176 +0000 UTC m=+0.503875042 container attach b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.478248883 +0000 UTC m=+0.504239729 container died b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-847627b7c4af55d11d6a8de0466ab1953f080942e325dcf3b2d75bf319993d2d-merged.mount: Deactivated successfully.
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/696134221' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 01:48:19 np0005603541 gallant_elbakyan[81921]: set require_min_compat_client to mimic
Jan 31 01:48:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 01:48:19 np0005603541 systemd[1]: libpod-4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627.scope: Deactivated successfully.
Jan 31 01:48:19 np0005603541 podman[82112]: 2026-01-31 06:48:19.983904282 +0000 UTC m=+1.009895128 container remove b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:19 np0005603541 systemd[1]: libpod-conmon-b2a248d8a710d7f432664a9be2b3e766e3508b55d76c469c9c8cabd49ee19768.scope: Deactivated successfully.
Jan 31 01:48:20 np0005603541 podman[81856]: 2026-01-31 06:48:20.007812717 +0000 UTC m=+2.617646114 container died 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:20 np0005603541 systemd[1]: var-lib-containers-storage-overlay-2485149684d346739ca6fd94e58077411ca404a1b96ad4338ccc7091d31e8528-merged.mount: Deactivated successfully.
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/696134221' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:20 np0005603541 podman[81856]: 2026-01-31 06:48:20.574390298 +0000 UTC m=+3.184223655 container remove 4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627 (image=quay.io/ceph/ceph:v18, name=gallant_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:20 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.gghdjs (unknown last config time)...
Jan 31 01:48:20 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.gghdjs (unknown last config time)...
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:20 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:48:20 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:48:20 np0005603541 systemd[1]: libpod-conmon-4626586a13af638c86f41a8ec2babf279dad6b6b1091c12e5f66db5ce167d627.scope: Deactivated successfully.
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:20.937060242 +0000 UTC m=+0.021022401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.072870007 +0000 UTC m=+0.156832156 container create 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 01:48:21 np0005603541 systemd[1]: Started libpod-conmon-25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41.scope.
Jan 31 01:48:21 np0005603541 python3[82318]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:21 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.221308854 +0000 UTC m=+0.305271033 container init 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.226573671 +0000 UTC m=+0.310535830 container start 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:48:21 np0005603541 lucid_goldstine[82321]: 167 167
Jan 31 01:48:21 np0005603541 systemd[1]: libpod-25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41.scope: Deactivated successfully.
Jan 31 01:48:21 np0005603541 conmon[82321]: conmon 25184bf62244d788cfad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41.scope/container/memory.events
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.232139106 +0000 UTC m=+0.316101295 container attach 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:21 np0005603541 podman[82323]: 2026-01-31 06:48:21.236078684 +0000 UTC m=+0.034341999 container create d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.240874041 +0000 UTC m=+0.324836210 container died 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:48:21 np0005603541 systemd[1]: var-lib-containers-storage-overlay-94fc7458a97075a4694b446d0984248c53ea886d8ce00bcfb6d6f84c22d52491-merged.mount: Deactivated successfully.
Jan 31 01:48:21 np0005603541 systemd[1]: Started libpod-conmon-d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19.scope.
Jan 31 01:48:21 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2030baafc7757c249451ec745682e382ef9576b733824ecb7d5e0df8dcddc170/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2030baafc7757c249451ec745682e382ef9576b733824ecb7d5e0df8dcddc170/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2030baafc7757c249451ec745682e382ef9576b733824ecb7d5e0df8dcddc170/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:21 np0005603541 podman[82279]: 2026-01-31 06:48:21.28645575 +0000 UTC m=+0.370417909 container remove 25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:48:21 np0005603541 systemd[1]: libpod-conmon-25184bf62244d788cfad5e7c886eed93c50f17e3012dddabd5a1074c2455df41.scope: Deactivated successfully.
Jan 31 01:48:21 np0005603541 podman[82323]: 2026-01-31 06:48:21.300855951 +0000 UTC m=+0.099119266 container init d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:21 np0005603541 podman[82323]: 2026-01-31 06:48:21.305424453 +0000 UTC m=+0.103687758 container start d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 01:48:21 np0005603541 podman[82323]: 2026-01-31 06:48:21.308970603 +0000 UTC m=+0.107233938 container attach d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:48:21 np0005603541 podman[82323]: 2026-01-31 06:48:21.220547977 +0000 UTC m=+0.018811312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: Reconfiguring mgr.compute-0.gghdjs (unknown last config time)...
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:48:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 72804495-71db-4e9c-8ba9-365c3c9ae679 does not exist
Jan 31 01:48:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev a717944b-1698-4b64-b5e6-56ae340e95f8 does not exist
Jan 31 01:48:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d13af98f-cef4-4e72-8528-10597f739c37 does not exist
Jan 31 01:48:21 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Added host compute-0
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 4d7bca7a-4662-45c2-a656-90803930ebba does not exist
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 2011eb93-19d9-4fa8-ad6a-5e2dab417be9 does not exist
Jan 31 01:48:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 64351575-caa9-42e6-9146-b24c8f80f62b does not exist
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: Added host compute-0
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:23 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 31 01:48:23 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 31 01:48:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:25 np0005603541 ceph-mon[74355]: Deploying cephadm binary to compute-1
Jan 31 01:48:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:26 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Added host compute-1
Jan 31 01:48:26 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:28 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 31 01:48:28 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 31 01:48:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:28 np0005603541 ceph-mon[74355]: Added host compute-1
Jan 31 01:48:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:29 np0005603541 ceph-mon[74355]: Deploying cephadm binary to compute-2
Jan 31 01:48:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Added host compute-2
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Added host 'compute-1' with addr '192.168.122.101'
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Added host 'compute-2' with addr '192.168.122.102'
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Scheduled mon update...
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Scheduled mgr update...
Jan 31 01:48:31 np0005603541 modest_pasteur[82355]: Scheduled osd.default_drive_group update...
Jan 31 01:48:31 np0005603541 systemd[1]: libpod-d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19.scope: Deactivated successfully.
Jan 31 01:48:31 np0005603541 podman[82323]: 2026-01-31 06:48:31.221428333 +0000 UTC m=+10.019691658 container died d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:48:31 np0005603541 systemd[1]: var-lib-containers-storage-overlay-2030baafc7757c249451ec745682e382ef9576b733824ecb7d5e0df8dcddc170-merged.mount: Deactivated successfully.
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:31 np0005603541 podman[82323]: 2026-01-31 06:48:31.270470129 +0000 UTC m=+10.068733434 container remove d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19 (image=quay.io/ceph/ceph:v18, name=modest_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:31 np0005603541 systemd[1]: libpod-conmon-d6ffd4af53a6e2964d79dfe3a44d74f1b85c523407b2797dd4c689bc925d5f19.scope: Deactivated successfully.
Jan 31 01:48:31 np0005603541 python3[82643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:31 np0005603541 podman[82645]: 2026-01-31 06:48:31.759053777 +0000 UTC m=+0.057252680 container create d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 01:48:31 np0005603541 systemd[1]: Started libpod-conmon-d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1.scope.
Jan 31 01:48:31 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f4417186027cde99b08345a1b90b0b3c3908a013940fab8d5a6623f0c4e815/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f4417186027cde99b08345a1b90b0b3c3908a013940fab8d5a6623f0c4e815/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f4417186027cde99b08345a1b90b0b3c3908a013940fab8d5a6623f0c4e815/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:31 np0005603541 podman[82645]: 2026-01-31 06:48:31.722994462 +0000 UTC m=+0.021193385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:48:31 np0005603541 podman[82645]: 2026-01-31 06:48:31.822844843 +0000 UTC m=+0.121043766 container init d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:48:31 np0005603541 podman[82645]: 2026-01-31 06:48:31.829254277 +0000 UTC m=+0.127453180 container start d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 01:48:31 np0005603541 podman[82645]: 2026-01-31 06:48:31.832448918 +0000 UTC m=+0.130647901 container attach d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 01:48:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Added host compute-2
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 01:48:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099729819' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 01:48:32 np0005603541 focused_bhaskara[82662]: 
Jan 31 01:48:32 np0005603541 focused_bhaskara[82662]: {"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":93,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T06:46:56.268384+0000","services":{}},"progress_events":{}}
Jan 31 01:48:32 np0005603541 systemd[1]: libpod-d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1.scope: Deactivated successfully.
Jan 31 01:48:32 np0005603541 podman[82645]: 2026-01-31 06:48:32.743766052 +0000 UTC m=+1.041964945 container died d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-16f4417186027cde99b08345a1b90b0b3c3908a013940fab8d5a6623f0c4e815-merged.mount: Deactivated successfully.
Jan 31 01:48:32 np0005603541 podman[82645]: 2026-01-31 06:48:32.786119128 +0000 UTC m=+1.084318031 container remove d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1 (image=quay.io/ceph/ceph:v18, name=focused_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 31 01:48:32 np0005603541 systemd[1]: libpod-conmon-d4eb4ca7673a530e2f5dddfc0b9b2b51df49ffdb8ce7c0f93658c937b71973f1.scope: Deactivated successfully.
Jan 31 01:48:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:48:48
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] No pools available
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:48:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:48:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:49 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:48:49 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:48:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:48:50 np0005603541 ceph-mon[74355]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:48:51 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:51 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:51 np0005603541 ceph-mon[74355]: Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:48:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:52 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:52 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:52 np0005603541 ceph-mon[74355]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 1be9d50b-44cc-4df2-9ecd-a71e795d033e (Updating crash deployment (+1 -> 2))
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:48:53.329+0000 7f6ec06d9640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: service_name: mon
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: placement:
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  hosts:
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-0
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-1
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-2
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:48:53.330+0000 7f6ec06d9640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: service_name: mgr
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: placement:
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  hosts:
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-0
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-1
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]:  - compute-2
Jan 31 01:48:53 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 31 01:48:53 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:48:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: Deploying daemon crash.compute-1 on compute-1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 01:48:55 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:55 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 1be9d50b-44cc-4df2-9ecd-a71e795d033e (Updating crash deployment (+1 -> 2))
Jan 31 01:48:55 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 1be9d50b-44cc-4df2-9ecd-a71e795d033e (Updating crash deployment (+1 -> 2)) in 2 seconds
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:48:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.163881694 +0000 UTC m=+0.031790642 container create 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:48:56 np0005603541 systemd[1]: Started libpod-conmon-7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b.scope.
Jan 31 01:48:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.229581552 +0000 UTC m=+0.097490580 container init 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.234924671 +0000 UTC m=+0.102833619 container start 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.238581983 +0000 UTC m=+0.106490941 container attach 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:48:56 np0005603541 tender_brattain[82858]: 167 167
Jan 31 01:48:56 np0005603541 systemd[1]: libpod-7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b.scope: Deactivated successfully.
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.24068555 +0000 UTC m=+0.108594498 container died 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.149649666 +0000 UTC m=+0.017558634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a879e223fefb4a43fb738cb5b81ee40c3e023514d2cb4d47b695d3bc232c04ed-merged.mount: Deactivated successfully.
Jan 31 01:48:56 np0005603541 podman[82842]: 2026-01-31 06:48:56.279929917 +0000 UTC m=+0.147838865 container remove 7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brattain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:48:56 np0005603541 systemd[1]: libpod-conmon-7e4025f3082516229c12c07fbc1283f043dc064d1b0b7e474375531b0c15ec5b.scope: Deactivated successfully.
Jan 31 01:48:56 np0005603541 podman[82881]: 2026-01-31 06:48:56.400099831 +0000 UTC m=+0.042146172 container create ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 01:48:56 np0005603541 systemd[1]: Started libpod-conmon-ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a.scope.
Jan 31 01:48:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:48:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:48:56 np0005603541 podman[82881]: 2026-01-31 06:48:56.468781196 +0000 UTC m=+0.110827557 container init ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:48:56 np0005603541 podman[82881]: 2026-01-31 06:48:56.473351729 +0000 UTC m=+0.115398070 container start ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:48:56 np0005603541 podman[82881]: 2026-01-31 06:48:56.476426948 +0000 UTC m=+0.118473289 container attach ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 01:48:56 np0005603541 podman[82881]: 2026-01-31 06:48:56.383384258 +0000 UTC m=+0.025430609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:48:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: --> relative data size: 1.0
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b
Jan 31 01:48:57 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b"} v 0) v1
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1691019324' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b"}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1691019324' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b"}]': finished
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1691019324' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b"}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1691019324' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b"}]': finished
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42"} v 0) v1
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3095916902' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42"}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3095916902' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42"}]': finished
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:48:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:48:57 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:48:57 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 01:48:57 np0005603541 lvm[82945]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 01:48:57 np0005603541 lvm[82945]: VG ceph_vg0 finished
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 01:48:57 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595593260' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 01:48:58 np0005603541 interesting_bhabha[82898]: stderr: got monmap epoch 1
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2385253119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 01:48:58 np0005603541 interesting_bhabha[82898]: --> Creating keyring file for osd.0
Jan 31 01:48:58 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 01:48:58 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 01:48:58 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b --setuser ceph --setgroup ceph
Jan 31 01:48:58 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 2 completed events
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/3095916902' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42"}]: dispatch
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/3095916902' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42"}]': finished
Jan 31 01:48:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:48:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:48:59 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:48:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 01:48:59 np0005603541 ceph-mon[74355]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: stderr: 2026-01-31T06:48:58.302+0000 7fcbf5973740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: stderr: 2026-01-31T06:48:58.302+0000 7fcbf5973740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: stderr: 2026-01-31T06:48:58.302+0000 7fcbf5973740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: stderr: 2026-01-31T06:48:58.303+0000 7fcbf5973740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 01:49:00 np0005603541 interesting_bhabha[82898]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 01:49:00 np0005603541 systemd[1]: libpod-ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a.scope: Deactivated successfully.
Jan 31 01:49:00 np0005603541 systemd[1]: libpod-ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a.scope: Consumed 2.113s CPU time.
Jan 31 01:49:00 np0005603541 podman[83847]: 2026-01-31 06:49:00.870406534 +0000 UTC m=+0.022261129 container died ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 01:49:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5cbe82525a651672aa26988c9434082d063c7302d72e4bc3a4418bed3bb1488a-merged.mount: Deactivated successfully.
Jan 31 01:49:00 np0005603541 podman[83847]: 2026-01-31 06:49:00.916909993 +0000 UTC m=+0.068764568 container remove ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:49:00 np0005603541 systemd[1]: libpod-conmon-ed8c5e1ebb94f307ab95c6db030e32b2ba72809d3933a5dd27d16433b4bd214a.scope: Deactivated successfully.
Jan 31 01:49:01 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.426541151 +0000 UTC m=+0.036534107 container create 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:49:01 np0005603541 systemd[1]: Started libpod-conmon-412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc.scope.
Jan 31 01:49:01 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.499698076 +0000 UTC m=+0.109691082 container init 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.410247467 +0000 UTC m=+0.020240443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.510130119 +0000 UTC m=+0.120123085 container start 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:49:01 np0005603541 busy_chaplygin[84019]: 167 167
Jan 31 01:49:01 np0005603541 systemd[1]: libpod-412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc.scope: Deactivated successfully.
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.514130629 +0000 UTC m=+0.124123595 container attach 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.515220023 +0000 UTC m=+0.125212989 container died 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:01 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8d5f5159f69b966f3643cb583d962514575830fdfe9094b82dcceee9ab6b89a5-merged.mount: Deactivated successfully.
Jan 31 01:49:01 np0005603541 podman[84002]: 2026-01-31 06:49:01.545638002 +0000 UTC m=+0.155630958 container remove 412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:49:01 np0005603541 systemd[1]: libpod-conmon-412425e1004cbe41c9b3017e7e80928e5640a00bb9d6cfc74d7c7f5a121633dc.scope: Deactivated successfully.
Jan 31 01:49:01 np0005603541 podman[84043]: 2026-01-31 06:49:01.687059472 +0000 UTC m=+0.047894371 container create 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:49:01 np0005603541 systemd[1]: Started libpod-conmon-895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5.scope.
Jan 31 01:49:01 np0005603541 podman[84043]: 2026-01-31 06:49:01.658927204 +0000 UTC m=+0.019762203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:01 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8435c08c795cdf35b3ef09a3ef47332ce13a32fe7f4f444350cd2890ba98a81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8435c08c795cdf35b3ef09a3ef47332ce13a32fe7f4f444350cd2890ba98a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8435c08c795cdf35b3ef09a3ef47332ce13a32fe7f4f444350cd2890ba98a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8435c08c795cdf35b3ef09a3ef47332ce13a32fe7f4f444350cd2890ba98a81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:01 np0005603541 podman[84043]: 2026-01-31 06:49:01.78182639 +0000 UTC m=+0.142661329 container init 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:01 np0005603541 podman[84043]: 2026-01-31 06:49:01.791331513 +0000 UTC m=+0.152166412 container start 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:01 np0005603541 podman[84043]: 2026-01-31 06:49:01.79480454 +0000 UTC m=+0.155639489 container attach 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]: {
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:    "0": [
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:        {
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "devices": [
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "/dev/loop3"
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            ],
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "lv_name": "ceph_lv0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "lv_size": "7511998464",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "name": "ceph_lv0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "tags": {
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.cluster_name": "ceph",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.crush_device_class": "",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.encrypted": "0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.osd_id": "0",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.type": "block",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:                "ceph.vdo": "0"
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            },
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "type": "block",
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:            "vg_name": "ceph_vg0"
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:        }
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]:    ]
Jan 31 01:49:02 np0005603541 objective_meninsky[84060]: }
Jan 31 01:49:02 np0005603541 systemd[1]: libpod-895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5.scope: Deactivated successfully.
Jan 31 01:49:02 np0005603541 podman[84069]: 2026-01-31 06:49:02.519367391 +0000 UTC m=+0.019956236 container died 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 01:49:02 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c8435c08c795cdf35b3ef09a3ef47332ce13a32fe7f4f444350cd2890ba98a81-merged.mount: Deactivated successfully.
Jan 31 01:49:02 np0005603541 podman[84069]: 2026-01-31 06:49:02.560429489 +0000 UTC m=+0.061018314 container remove 895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:49:02 np0005603541 systemd[1]: libpod-conmon-895cf628023ddc35f8d70221b56670961e97c526e9c84d51ea1c9d9468e346d5.scope: Deactivated successfully.
Jan 31 01:49:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 01:49:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 01:49:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:02 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 01:49:02 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 01:49:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 01:49:03 np0005603541 python3[84209]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.057887004 +0000 UTC m=+0.032769252 container create 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 01:49:03 np0005603541 systemd[1]: Started libpod-conmon-965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0.scope.
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.112737851 +0000 UTC m=+0.044923855 container create 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b343942a9c41173ef79a9dd886c16834e9dcdfb6425971f8b6663eeff3de49/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b343942a9c41173ef79a9dd886c16834e9dcdfb6425971f8b6663eeff3de49/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b343942a9c41173ef79a9dd886c16834e9dcdfb6425971f8b6663eeff3de49/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 systemd[1]: Started libpod-conmon-93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f.scope.
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.042284286 +0000 UTC m=+0.017166574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.140114632 +0000 UTC m=+0.114996980 container init 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.148587941 +0000 UTC m=+0.123470229 container start 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.152465298 +0000 UTC m=+0.127347586 container attach 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.15790196 +0000 UTC m=+0.090088004 container init 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.162660506 +0000 UTC m=+0.094846510 container start 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:03 np0005603541 cranky_banzai[84283]: 167 167
Jan 31 01:49:03 np0005603541 systemd[1]: libpod-93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f.scope: Deactivated successfully.
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.166301047 +0000 UTC m=+0.098487071 container attach 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.166537783 +0000 UTC m=+0.098723797 container died 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:49:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-56b7f2b4ac018c0b8bd350f122df3985a1119ff37d949e87519386a5cba6bdb0-merged.mount: Deactivated successfully.
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.096969088 +0000 UTC m=+0.029155142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:03 np0005603541 podman[84262]: 2026-01-31 06:49:03.204436879 +0000 UTC m=+0.136622883 container remove 93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:03 np0005603541 systemd[1]: libpod-conmon-93d4be7d737eb81e6129a38f51001ade405d7a05582280fc0497a9f9b1be4c4f.scope: Deactivated successfully.
Jan 31 01:49:03 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:03 np0005603541 podman[84316]: 2026-01-31 06:49:03.394342253 +0000 UTC m=+0.042430159 container create 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 01:49:03 np0005603541 systemd[1]: Started libpod-conmon-8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc.scope.
Jan 31 01:49:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:03 np0005603541 podman[84316]: 2026-01-31 06:49:03.377144029 +0000 UTC m=+0.025231965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:03 np0005603541 podman[84316]: 2026-01-31 06:49:03.485221804 +0000 UTC m=+0.133309720 container init 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:03 np0005603541 podman[84316]: 2026-01-31 06:49:03.491094635 +0000 UTC m=+0.139182531 container start 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:03 np0005603541 podman[84316]: 2026-01-31 06:49:03.49445419 +0000 UTC m=+0.142542116 container attach 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:49:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 01:49:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724214770' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 01:49:03 np0005603541 inspiring_lewin[84278]: 
Jan 31 01:49:03 np0005603541 inspiring_lewin[84278]: {"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769842137,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T06:48:50.186539+0000","services":{}},"progress_events":{}}
Jan 31 01:49:03 np0005603541 systemd[1]: libpod-965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0.scope: Deactivated successfully.
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.761744153 +0000 UTC m=+0.736626421 container died 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:49:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-87b343942a9c41173ef79a9dd886c16834e9dcdfb6425971f8b6663eeff3de49-merged.mount: Deactivated successfully.
Jan 31 01:49:03 np0005603541 ceph-mon[74355]: Deploying daemon osd.0 on compute-0
Jan 31 01:49:03 np0005603541 podman[84235]: 2026-01-31 06:49:03.810513073 +0000 UTC m=+0.785395361 container remove 965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0 (image=quay.io/ceph/ceph:v18, name=inspiring_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 01:49:03 np0005603541 systemd[1]: libpod-conmon-965575735ed6dd01d628c6aa6d42dba95c0d55350d3d3cac19c1fd734dd5d5c0.scope: Deactivated successfully.
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:04 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test[84333]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 01:49:04 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test[84333]:                            [--no-systemd] [--no-tmpfs]
Jan 31 01:49:04 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test[84333]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 01:49:04 np0005603541 systemd[1]: libpod-8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc.scope: Deactivated successfully.
Jan 31 01:49:04 np0005603541 podman[84316]: 2026-01-31 06:49:04.158032778 +0000 UTC m=+0.806120704 container died 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:49:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c8745e3aba2f7a24d54ade0776eb8184ef8f4aee9a4307b5c9e131a15b02d35c-merged.mount: Deactivated successfully.
Jan 31 01:49:04 np0005603541 podman[84316]: 2026-01-31 06:49:04.204100077 +0000 UTC m=+0.852187983 container remove 8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:49:04 np0005603541 systemd[1]: libpod-conmon-8586ec9ded72af4a9857b3048c91dc26beea52c836dccfcedd507b5213674cdc.scope: Deactivated successfully.
Jan 31 01:49:04 np0005603541 systemd[1]: Reloading.
Jan 31 01:49:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:49:04 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:04 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 31 01:49:04 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 31 01:49:04 np0005603541 systemd[1]: Reloading.
Jan 31 01:49:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:49:04 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:49:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 01:49:04 np0005603541 systemd[1]: Starting Ceph osd.0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.063285437 +0000 UTC m=+0.033067380 container create b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.12738437 +0000 UTC m=+0.097166333 container init b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.132845781 +0000 UTC m=+0.102627734 container start b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.136958723 +0000 UTC m=+0.106740666 container attach b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.049279134 +0000 UTC m=+0.019061077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:05 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:05 np0005603541 ceph-mon[74355]: Deploying daemon osd.1 on compute-1
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:05 np0005603541 bash[84525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 01:49:05 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate[84540]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 01:49:05 np0005603541 bash[84525]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 01:49:05 np0005603541 systemd[1]: libpod-b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35.scope: Deactivated successfully.
Jan 31 01:49:05 np0005603541 podman[84525]: 2026-01-31 06:49:05.971763778 +0000 UTC m=+0.941545731 container died b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay-dfd270aada186dd4628831f69a42023976bc9662cc352f332166ec2138cf7ddf-merged.mount: Deactivated successfully.
Jan 31 01:49:06 np0005603541 podman[84525]: 2026-01-31 06:49:06.013778967 +0000 UTC m=+0.983560910 container remove b9078596e8b8a062aefc5cf77a7cd00ca48be6e42c1e3daf5f7e68100695bf35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:06 np0005603541 podman[84723]: 2026-01-31 06:49:06.153225152 +0000 UTC m=+0.033820607 container create 5ff5a7342e79017dfd363c4b769cd462bd19c80ea1a76bb561d285634d9bf82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:49:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ea74e3a2ab510a48a77a355f099f042a4d318ccc22816cb81a3553623e54c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ea74e3a2ab510a48a77a355f099f042a4d318ccc22816cb81a3553623e54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ea74e3a2ab510a48a77a355f099f042a4d318ccc22816cb81a3553623e54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ea74e3a2ab510a48a77a355f099f042a4d318ccc22816cb81a3553623e54c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ea74e3a2ab510a48a77a355f099f042a4d318ccc22816cb81a3553623e54c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:06 np0005603541 podman[84723]: 2026-01-31 06:49:06.201355998 +0000 UTC m=+0.081951443 container init 5ff5a7342e79017dfd363c4b769cd462bd19c80ea1a76bb561d285634d9bf82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:49:06 np0005603541 podman[84723]: 2026-01-31 06:49:06.204997799 +0000 UTC m=+0.085593224 container start 5ff5a7342e79017dfd363c4b769cd462bd19c80ea1a76bb561d285634d9bf82d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:06 np0005603541 bash[84723]: 5ff5a7342e79017dfd363c4b769cd462bd19c80ea1a76bb561d285634d9bf82d
Jan 31 01:49:06 np0005603541 podman[84723]: 2026-01-31 06:49:06.136817056 +0000 UTC m=+0.017412501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:06 np0005603541 systemd[1]: Started Ceph osd.0 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:49:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:49:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:49:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: pidfile_write: ignore empty --pid-file
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be64a45800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be64a45800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be64a45800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be64a45800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be6587d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be6587d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be6587d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be6587d800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be6587d800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be64a45800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.710549826 +0000 UTC m=+0.036747912 container create 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:06 np0005603541 systemd[1]: Started libpod-conmon-0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923.scope.
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 01:49:06 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: load: jerasure load: lrc 
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 01:49:06 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.694570129 +0000 UTC m=+0.020768245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.794992774 +0000 UTC m=+0.121190880 container init 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.80023729 +0000 UTC m=+0.126435376 container start 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.803186076 +0000 UTC m=+0.129384182 container attach 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:49:06 np0005603541 systemd[1]: libpod-0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923.scope: Deactivated successfully.
Jan 31 01:49:06 np0005603541 objective_nobel[84915]: 167 167
Jan 31 01:49:06 np0005603541 conmon[84915]: conmon 0ddc0aba17da7f81d466 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923.scope/container/memory.events
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.807470721 +0000 UTC m=+0.133668807 container died 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:06 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d12eb36260e3204d1175edf8f116a36d98ed014edb8fdcb026d9b4cd7352bfb0-merged.mount: Deactivated successfully.
Jan 31 01:49:06 np0005603541 podman[84899]: 2026-01-31 06:49:06.845646455 +0000 UTC m=+0.171844571 container remove 0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:06 np0005603541 systemd[1]: libpod-conmon-0ddc0aba17da7f81d466419bf53cfe0a960a02d81e31cf41cd877b90379ef923.scope: Deactivated successfully.
Jan 31 01:49:06 np0005603541 podman[84946]: 2026-01-31 06:49:06.971978528 +0000 UTC m=+0.034892521 container create 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:07 np0005603541 systemd[1]: Started libpod-conmon-58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977.scope.
Jan 31 01:49:07 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ce11084b482c5651418dd1fa02a91e2bb41f6d20f26a299640fec5b1472d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ce11084b482c5651418dd1fa02a91e2bb41f6d20f26a299640fec5b1472d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ce11084b482c5651418dd1fa02a91e2bb41f6d20f26a299640fec5b1472d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285ce11084b482c5651418dd1fa02a91e2bb41f6d20f26a299640fec5b1472d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:07.039835954 +0000 UTC m=+0.102749977 container init 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:07.0463518 +0000 UTC m=+0.109265783 container start 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:06.955559721 +0000 UTC m=+0.018473714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:07.052651461 +0000 UTC m=+0.115565464 container attach 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:07 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658fec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs mount
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs mount shared_bdev_used = 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: RocksDB version: 7.9.2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Git sha 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DB SUMMARY
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DB Session ID:  J2I1AX6H8BL92T2KAQB6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: CURRENT file:  CURRENT
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.error_if_exists: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.create_if_missing: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                     Options.env: 0x55be658cfc70
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                Options.info_log: 0x55be64ac2ba0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.statistics: (nil)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.use_fsync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.db_log_dir: 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.write_buffer_manager: 0x55be659d8460
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.unordered_write: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.row_cache: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.wal_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.two_write_queues: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.wal_compression: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.atomic_flush: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_background_jobs: 4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_background_compactions: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_subcompactions: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.max_open_files: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Compression algorithms supported:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZSTD supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kXpressCompression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZlibCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55be64ab8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55be64ab8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55be64ab8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac25c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac25c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac25c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b4a83f9d-0d5c-48d8-922c-0bd0157812bc
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147356862, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147356988, "job": 1, "event": "recovery_finished"}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: freelist init
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: freelist _read_cfg
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs umount
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bdev(0x55be658ff400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs mount
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluefs mount shared_bdev_used = 4718592
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: RocksDB version: 7.9.2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Git sha 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DB SUMMARY
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DB Session ID:  J2I1AX6H8BL92T2KAQB7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: CURRENT file:  CURRENT
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.error_if_exists: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.create_if_missing: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                     Options.env: 0x55be64b043f0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                Options.info_log: 0x55be64a9f580
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.statistics: (nil)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.use_fsync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.db_log_dir: 
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.write_buffer_manager: 0x55be659d8960
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.unordered_write: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.row_cache: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                              Options.wal_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.two_write_queues: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.wal_compression: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.atomic_flush: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_background_jobs: 4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_background_compactions: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_subcompactions: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.max_open_files: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Compression algorithms supported:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZSTD supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kXpressCompression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kZlibCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab8f30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab9610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab9610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:           Options.merge_operator: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55be64ac3100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55be64ab9610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.compression: LZ4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.num_levels: 7
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.bloom_locality: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                               Options.ttl: 2592000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                       Options.enable_blob_files: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                           Options.min_blob_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b4a83f9d-0d5c-48d8-922c-0bd0157812bc
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147638106, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147643534, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a83f9d-0d5c-48d8-922c-0bd0157812bc", "db_session_id": "J2I1AX6H8BL92T2KAQB7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147646723, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a83f9d-0d5c-48d8-922c-0bd0157812bc", "db_session_id": "J2I1AX6H8BL92T2KAQB7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147649795, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a83f9d-0d5c-48d8-922c-0bd0157812bc", "db_session_id": "J2I1AX6H8BL92T2KAQB7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842147651413, "job": 1, "event": "recovery_finished"}
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55be64b76700
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: DB pointer 0x55be659c1a00
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 460.80 MB usag
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: _get_class not permitted to load lua
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: _get_class not permitted to load sdk
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: _get_class not permitted to load test_remote_reads
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 load_pgs
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 load_pgs opened 0 pgs
Jan 31 01:49:07 np0005603541 ceph-osd[84743]: osd.0 0 log_to_monitors true
Jan 31 01:49:07 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0[84739]: 2026-01-31T06:49:07.678+0000 7f2eb5e75740 -1 osd.0 0 log_to_monitors true
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]: {
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:        "osd_id": 0,
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:        "type": "bluestore"
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]:    }
Jan 31 01:49:07 np0005603541 amazing_almeida[84963]: }
Jan 31 01:49:07 np0005603541 systemd[1]: libpod-58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977.scope: Deactivated successfully.
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:07.825403439 +0000 UTC m=+0.888317412 container died 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-285ce11084b482c5651418dd1fa02a91e2bb41f6d20f26a299640fec5b1472d1-merged.mount: Deactivated successfully.
Jan 31 01:49:07 np0005603541 podman[84946]: 2026-01-31 06:49:07.878813452 +0000 UTC m=+0.941727425 container remove 58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 31 01:49:07 np0005603541 systemd[1]: libpod-conmon-58730016ea0d7fd12adaf22690391b4d9aff1d7f833dba80788977861d941977.scope: Deactivated successfully.
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:49:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:08 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:08 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:08 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 01:49:08 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 done with init, starting boot process
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 start_boot
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 01:49:09 np0005603541 ceph-osd[84743]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:09 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:09 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 01:49:09 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1104798728; not ready for session (expect reconnect)
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:09 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:09 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 31 01:49:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:10 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1104798728; not ready for session (expect reconnect)
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:10 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: from='osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1104798728; not ready for session (expect reconnect)
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 01:49:11 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 01:49:11 np0005603541 podman[85635]: 2026-01-31 06:49:11.379830545 +0000 UTC m=+0.071985919 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:11 np0005603541 podman[85635]: 2026-01-31 06:49:11.469292645 +0000 UTC m=+0.161447989 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:49:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.398 iops: 8549.791 elapsed_sec: 0.351
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: log_channel(cluster) log [WRN] : OSD bench result of 8549.791224 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 0 waiting for initial osdmap
Jan 31 01:49:12 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0[84739]: 2026-01-31T06:49:12.190+0000 7f2eb1df5640 -1 osd.0 0 waiting for initial osdmap
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 31 01:49:12 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-0[84739]: 2026-01-31T06:49:12.223+0000 7f2ead41d640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 01:49:12 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1104798728; not ready for session (expect reconnect)
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:12 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 01:49:12 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:12 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728] boot
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:12 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.767749809 +0000 UTC m=+0.035482173 container create 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 01:49:12 np0005603541 ceph-osd[84743]: osd.0 10 state: booting -> active
Jan 31 01:49:12 np0005603541 systemd[1]: Started libpod-conmon-6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6.scope.
Jan 31 01:49:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.751448856 +0000 UTC m=+0.019181220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.852707398 +0000 UTC m=+0.120439812 container init 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.859259145 +0000 UTC m=+0.126991489 container start 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.863142502 +0000 UTC m=+0.130874896 container attach 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:12 np0005603541 zen_fermat[86012]: 167 167
Jan 31 01:49:12 np0005603541 systemd[1]: libpod-6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6.scope: Deactivated successfully.
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.866794653 +0000 UTC m=+0.134527007 container died 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:49:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-761be9260a1bdf6296ca62336b5421f019b0c643ed624a813b8bb4627d7f80d9-merged.mount: Deactivated successfully.
Jan 31 01:49:12 np0005603541 podman[85996]: 2026-01-31 06:49:12.89934695 +0000 UTC m=+0.167079314 container remove 6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:12 np0005603541 systemd[1]: libpod-conmon-6bbdcf40220e0466964c33c3fb3825e2a6dfe16c998e26e1b2a193ee1b4859e6.scope: Deactivated successfully.
Jan 31 01:49:13 np0005603541 podman[86037]: 2026-01-31 06:49:13.03985058 +0000 UTC m=+0.053018436 container create ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:13 np0005603541 systemd[1]: Started libpod-conmon-ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7.scope.
Jan 31 01:49:13 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c605802759a95f50586e088e1442839b93adf25f332a5c203f543703064594/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c605802759a95f50586e088e1442839b93adf25f332a5c203f543703064594/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c605802759a95f50586e088e1442839b93adf25f332a5c203f543703064594/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:13 np0005603541 podman[86037]: 2026-01-31 06:49:13.013411159 +0000 UTC m=+0.026579115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:49:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c605802759a95f50586e088e1442839b93adf25f332a5c203f543703064594/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:13 np0005603541 podman[86037]: 2026-01-31 06:49:13.140067539 +0000 UTC m=+0.153235395 container init ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:49:13 np0005603541 podman[86037]: 2026-01-31 06:49:13.147361033 +0000 UTC m=+0.160528889 container start ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:13 np0005603541 podman[86037]: 2026-01-31 06:49:13.150987093 +0000 UTC m=+0.164154969 container attach ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 01:49:13 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:13 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:13 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:13 np0005603541 ceph-mon[74355]: OSD bench result of 8549.791224 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 01:49:13 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:13 np0005603541 ceph-mon[74355]: osd.0 [v2:192.168.122.100:6802/1104798728,v1:192.168.122.100:6803/1104798728] boot
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]: [
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:    {
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "available": false,
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "ceph_device": false,
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "lsm_data": {},
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "lvs": [],
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "path": "/dev/sr0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "rejected_reasons": [
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "Has a FileSystem",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "Insufficient space (<5GB)"
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        ],
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        "sys_api": {
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "actuators": null,
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "device_nodes": "sr0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "devname": "sr0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "human_readable_size": "482.00 KB",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "id_bus": "ata",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "model": "QEMU DVD-ROM",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "nr_requests": "2",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "parent": "/dev/sr0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "partitions": {},
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "path": "/dev/sr0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "removable": "1",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "rev": "2.5+",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "ro": "0",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "rotational": "1",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "sas_address": "",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "sas_device_handle": "",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "scheduler_mode": "mq-deadline",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "sectors": 0,
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "sectorsize": "2048",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "size": 493568.0,
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "support_discard": "2048",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "type": "disk",
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:            "vendor": "QEMU"
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:        }
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]:    }
Jan 31 01:49:14 np0005603541 peaceful_hellman[86053]: ]
Jan 31 01:49:14 np0005603541 systemd[1]: libpod-ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7.scope: Deactivated successfully.
Jan 31 01:49:14 np0005603541 podman[86037]: 2026-01-31 06:49:14.117748268 +0000 UTC m=+1.130916124 container died ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 01:49:14 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d3c605802759a95f50586e088e1442839b93adf25f332a5c203f543703064594-merged.mount: Deactivated successfully.
Jan 31 01:49:14 np0005603541 podman[86037]: 2026-01-31 06:49:14.166564842 +0000 UTC m=+1.179732698 container remove ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:14 np0005603541 systemd[1]: libpod-conmon-ad25a81448d63bdada4f9238a79c2fcbe09c1a8bcec1aa869e20544298f111b7.scope: Deactivated successfully.
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] creating mgr pool
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:14 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 31 01:49:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 01:49:14 np0005603541 ceph-osd[84743]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 01:49:14 np0005603541 ceph-osd[84743]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 01:49:14 np0005603541 ceph-osd[84743]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 01:49:15 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:15 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:15 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:15 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:16 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 01:49:16 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:16 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:17 np0005603541 ceph-mon[74355]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:17 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:17 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:17 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:18 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:19 np0005603541 ceph-mon[74355]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:49:19 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:19 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:19 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:20 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:20 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:20 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 01:49:20 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 01:49:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:21 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:21 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:21 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:22 np0005603541 ceph-mon[74355]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 01:49:22 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:22 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:23 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:23 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:23 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:24 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:24 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:25 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:25 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:25 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:26 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:26 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:27 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:27 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:27 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:28 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:28 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:29 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:29 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:29 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:30 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:30 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:31 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:31 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:31 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:32 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:32 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:33 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:33 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:33 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:33 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:34 np0005603541 python3[87052]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.074623602 +0000 UTC m=+0.041764028 container create 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 01:49:34 np0005603541 systemd[1]: Started libpod-conmon-29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481.scope.
Jan 31 01:49:34 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e87c128ea4429945654372a783674df2d0f4dd79c8101731edde65e7069b53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e87c128ea4429945654372a783674df2d0f4dd79c8101731edde65e7069b53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e87c128ea4429945654372a783674df2d0f4dd79c8101731edde65e7069b53/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.051565772 +0000 UTC m=+0.018706208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.149802576 +0000 UTC m=+0.116943012 container init 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.160518801 +0000 UTC m=+0.127659217 container start 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.164053146 +0000 UTC m=+0.131193602 container attach 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:34 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:34 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:34 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 01:49:34 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664593526' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 01:49:34 np0005603541 affectionate_burnell[87068]: 
Jan 31 01:49:34 np0005603541 affectionate_burnell[87068]: {"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":156,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":1,"osd_up_since":1769842152,"num_in_osds":2,"osd_in_since":1769842137,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447016960,"bytes_avail":7064981504,"bytes_total":7511998464,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T06:48:50.186539+0000","services":{}},"progress_events":{}}
Jan 31 01:49:34 np0005603541 systemd[1]: libpod-29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481.scope: Deactivated successfully.
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.777540113 +0000 UTC m=+0.744680529 container died 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-76e87c128ea4429945654372a783674df2d0f4dd79c8101731edde65e7069b53-merged.mount: Deactivated successfully.
Jan 31 01:49:34 np0005603541 podman[87054]: 2026-01-31 06:49:34.81432637 +0000 UTC m=+0.781466776 container remove 29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481 (image=quay.io/ceph/ceph:v18, name=affectionate_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:49:34 np0005603541 systemd[1]: libpod-conmon-29b6d3273ca281fad990420884fbbc54fe5035e1b8d15e6a557fef4ab572a481.scope: Deactivated successfully.
Jan 31 01:49:35 np0005603541 python3[87130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:35 np0005603541 podman[87131]: 2026-01-31 06:49:35.305286705 +0000 UTC m=+0.036610945 container create 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:49:35 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:35 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:35 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 01:49:35 np0005603541 systemd[1]: Started libpod-conmon-94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e.scope.
Jan 31 01:49:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8fa51eb938fb235555117cadb61910416afce55004b9501692bd76e8c3d755/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8fa51eb938fb235555117cadb61910416afce55004b9501692bd76e8c3d755/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:35 np0005603541 podman[87131]: 2026-01-31 06:49:35.290084442 +0000 UTC m=+0.021408702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:35 np0005603541 podman[87131]: 2026-01-31 06:49:35.387616059 +0000 UTC m=+0.118940319 container init 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:49:35 np0005603541 podman[87131]: 2026-01-31 06:49:35.392332742 +0000 UTC m=+0.123656982 container start 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:49:35 np0005603541 podman[87131]: 2026-01-31 06:49:35.39557467 +0000 UTC m=+0.126898930 container attach 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:49:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234462672' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_mclock_max_capacity_iops_hdd}] v 0) v1
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' 
Jan 31 01:49:36 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2634111835; not ready for session (expect reconnect)
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:36 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/234462672' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: from='osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835]' entity='osd.1' 
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234462672' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 01:49:36 np0005603541 sweet_rubin[87146]: pool 'vms' created
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835] boot
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 01:49:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 01:49:36 np0005603541 systemd[1]: libpod-94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e.scope: Deactivated successfully.
Jan 31 01:49:36 np0005603541 podman[87131]: 2026-01-31 06:49:36.40145384 +0000 UTC m=+1.132778090 container died 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:36 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1e8fa51eb938fb235555117cadb61910416afce55004b9501692bd76e8c3d755-merged.mount: Deactivated successfully.
Jan 31 01:49:36 np0005603541 podman[87131]: 2026-01-31 06:49:36.437506211 +0000 UTC m=+1.168830451 container remove 94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e (image=quay.io/ceph/ceph:v18, name=sweet_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:36 np0005603541 systemd[1]: libpod-conmon-94d718bdadfe1eb69d42905a983951f1166ac8ba91509ff9b57a4f4595537d2e.scope: Deactivated successfully.
Jan 31 01:49:36 np0005603541 python3[87210]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:36 np0005603541 podman[87211]: 2026-01-31 06:49:36.760479586 +0000 UTC m=+0.052823521 container create 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 01:49:36 np0005603541 systemd[1]: Started libpod-conmon-7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025.scope.
Jan 31 01:49:36 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520e32c1ddb16b72df0640e8d9cf81544088988457644634f74ce41a795eefc4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520e32c1ddb16b72df0640e8d9cf81544088988457644634f74ce41a795eefc4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:36 np0005603541 podman[87211]: 2026-01-31 06:49:36.813956223 +0000 UTC m=+0.106300148 container init 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:49:36 np0005603541 podman[87211]: 2026-01-31 06:49:36.817560338 +0000 UTC m=+0.109904273 container start 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:36 np0005603541 podman[87211]: 2026-01-31 06:49:36.820758194 +0000 UTC m=+0.113102169 container attach 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:36 np0005603541 podman[87211]: 2026-01-31 06:49:36.741951644 +0000 UTC m=+0.034295609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:37 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 2 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 01:49:37 np0005603541 nervous_bose[87227]: pool 'volumes' created
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/234462672' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835] boot
Jan 31 01:49:37 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:37 np0005603541 systemd[1]: libpod-7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025.scope: Deactivated successfully.
Jan 31 01:49:37 np0005603541 podman[87211]: 2026-01-31 06:49:37.40272157 +0000 UTC m=+0.695065495 container died 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:37 np0005603541 systemd[1]: var-lib-containers-storage-overlay-520e32c1ddb16b72df0640e8d9cf81544088988457644634f74ce41a795eefc4-merged.mount: Deactivated successfully.
Jan 31 01:49:37 np0005603541 podman[87211]: 2026-01-31 06:49:37.435551624 +0000 UTC m=+0.727895549 container remove 7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025 (image=quay.io/ceph/ceph:v18, name=nervous_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:37 np0005603541 systemd[1]: libpod-conmon-7b22c70b8f04cb96d53fa3e27aca6a9f9cba57bacb25f978ff858ac80a334025.scope: Deactivated successfully.
Jan 31 01:49:37 np0005603541 python3[87292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:37 np0005603541 podman[87293]: 2026-01-31 06:49:37.71702507 +0000 UTC m=+0.040525068 container create 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:37 np0005603541 systemd[1]: Started libpod-conmon-009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8.scope.
Jan 31 01:49:37 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a36b4fa17921dbebc631087dd43bf7048a66c1d4b0c3424b12f044b34644524/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a36b4fa17921dbebc631087dd43bf7048a66c1d4b0c3424b12f044b34644524/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:37 np0005603541 podman[87293]: 2026-01-31 06:49:37.766190243 +0000 UTC m=+0.089690271 container init 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 01:49:37 np0005603541 podman[87293]: 2026-01-31 06:49:37.773684191 +0000 UTC m=+0.097184189 container start 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:49:37 np0005603541 podman[87293]: 2026-01-31 06:49:37.777219126 +0000 UTC m=+0.100719134 container attach 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:37 np0005603541 podman[87293]: 2026-01-31 06:49:37.697222617 +0000 UTC m=+0.020722665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:37 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:37 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 01:49:38 np0005603541 strange_feynman[87308]: pool 'backups' created
Jan 31 01:49:38 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 31 01:49:38 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:38 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:38 np0005603541 systemd[1]: libpod-009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8.scope: Deactivated successfully.
Jan 31 01:49:38 np0005603541 podman[87293]: 2026-01-31 06:49:38.426157539 +0000 UTC m=+0.749657587 container died 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:49:38 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4a36b4fa17921dbebc631087dd43bf7048a66c1d4b0c3424b12f044b34644524-merged.mount: Deactivated successfully.
Jan 31 01:49:38 np0005603541 podman[87293]: 2026-01-31 06:49:38.466913121 +0000 UTC m=+0.790413129 container remove 009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8 (image=quay.io/ceph/ceph:v18, name=strange_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 01:49:38 np0005603541 systemd[1]: libpod-conmon-009d5b7049ab13cae87dfbeccbe7db92a53ae486f7cca8d481980f7f3ff977b8.scope: Deactivated successfully.
Jan 31 01:49:38 np0005603541 python3[87373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:38 np0005603541 podman[87384]: 2026-01-31 06:49:38.875589103 +0000 UTC m=+0.048818916 container create b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:49:38 np0005603541 systemd[1]: Started libpod-conmon-b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6.scope.
Jan 31 01:49:38 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d7057db5e9f63c3605a67f401a0595f18a61d0597060b61c0008e07e5d2df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d7057db5e9f63c3605a67f401a0595f18a61d0597060b61c0008e07e5d2df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:38 np0005603541 podman[87384]: 2026-01-31 06:49:38.934108889 +0000 UTC m=+0.107338742 container init b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:38 np0005603541 podman[87384]: 2026-01-31 06:49:38.940246975 +0000 UTC m=+0.113476778 container start b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 01:49:38 np0005603541 podman[87384]: 2026-01-31 06:49:38.943931284 +0000 UTC m=+0.117161087 container attach b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 01:49:38 np0005603541 podman[87384]: 2026-01-31 06:49:38.851013826 +0000 UTC m=+0.024243709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:38 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] Check health
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:39 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v63: 4 pgs: 1 unknown, 3 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 31 01:49:39 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 31 01:49:40 np0005603541 charming_wilbur[87399]: pool 'images' created
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 31 01:49:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:40 np0005603541 systemd[1]: libpod-b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6.scope: Deactivated successfully.
Jan 31 01:49:40 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gghdjs(active, since 112s)
Jan 31 01:49:40 np0005603541 podman[87384]: 2026-01-31 06:49:40.450424729 +0000 UTC m=+1.623654552 container died b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:40 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d43d7057db5e9f63c3605a67f401a0595f18a61d0597060b61c0008e07e5d2df-merged.mount: Deactivated successfully.
Jan 31 01:49:40 np0005603541 podman[87384]: 2026-01-31 06:49:40.500521884 +0000 UTC m=+1.673751687 container remove b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6 (image=quay.io/ceph/ceph:v18, name=charming_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:49:40 np0005603541 systemd[1]: libpod-conmon-b80354edaab7067708fd6c5c2ecd6fd4d7bccaaa1c1ca74965a24309c6922de6.scope: Deactivated successfully.
Jan 31 01:49:40 np0005603541 python3[87467]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:40 np0005603541 podman[87468]: 2026-01-31 06:49:40.825774044 +0000 UTC m=+0.060443033 container create 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 01:49:40 np0005603541 systemd[1]: Started libpod-conmon-4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee.scope.
Jan 31 01:49:40 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14885d283712d1f1b4779833da6e789a9ca771a0aa91bb71568ac1f5402664e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14885d283712d1f1b4779833da6e789a9ca771a0aa91bb71568ac1f5402664e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:40 np0005603541 podman[87468]: 2026-01-31 06:49:40.806220388 +0000 UTC m=+0.040889397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:40 np0005603541 podman[87468]: 2026-01-31 06:49:40.903690214 +0000 UTC m=+0.138359213 container init 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:49:40 np0005603541 podman[87468]: 2026-01-31 06:49:40.91196559 +0000 UTC m=+0.146634559 container start 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:49:40 np0005603541 podman[87468]: 2026-01-31 06:49:40.915644708 +0000 UTC m=+0.150313697 container attach 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 01:49:41 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v66: 5 pgs: 2 unknown, 3 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 31 01:49:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 18 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:41 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 31 01:49:42 np0005603541 festive_benz[87482]: pool 'cephfs.cephfs.meta' created
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:42 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:42 np0005603541 systemd[1]: libpod-4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee.scope: Deactivated successfully.
Jan 31 01:49:42 np0005603541 podman[87468]: 2026-01-31 06:49:42.458754617 +0000 UTC m=+1.693423676 container died 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 01:49:42 np0005603541 systemd[1]: var-lib-containers-storage-overlay-14885d283712d1f1b4779833da6e789a9ca771a0aa91bb71568ac1f5402664e6-merged.mount: Deactivated successfully.
Jan 31 01:49:42 np0005603541 podman[87468]: 2026-01-31 06:49:42.511092736 +0000 UTC m=+1.745761715 container remove 4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee (image=quay.io/ceph/ceph:v18, name=festive_benz, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:49:42 np0005603541 systemd[1]: libpod-conmon-4c282634b76b2101018e0ab29f3383e586672f04619dd63da312ab87b14a15ee.scope: Deactivated successfully.
Jan 31 01:49:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:42 np0005603541 python3[87547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:42 np0005603541 podman[87548]: 2026-01-31 06:49:42.8830512 +0000 UTC m=+0.039937423 container create 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 01:49:42 np0005603541 systemd[1]: Started libpod-conmon-0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a.scope.
Jan 31 01:49:42 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287d80ff9cdac22a5fcd84328c0502793ea3ecc614bee9c931ad793aa2777229/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287d80ff9cdac22a5fcd84328c0502793ea3ecc614bee9c931ad793aa2777229/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:42 np0005603541 podman[87548]: 2026-01-31 06:49:42.946227547 +0000 UTC m=+0.103113760 container init 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:49:42 np0005603541 podman[87548]: 2026-01-31 06:49:42.950159751 +0000 UTC m=+0.107045934 container start 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:42 np0005603541 podman[87548]: 2026-01-31 06:49:42.953760068 +0000 UTC m=+0.110646341 container attach 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:42 np0005603541 podman[87548]: 2026-01-31 06:49:42.863463383 +0000 UTC m=+0.020349586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:43 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v69: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 31 01:49:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:43 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 31 01:49:44 np0005603541 affectionate_shaw[87563]: pool 'cephfs.cephfs.data' created
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 31 01:49:44 np0005603541 systemd[1]: libpod-0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a.scope: Deactivated successfully.
Jan 31 01:49:44 np0005603541 podman[87548]: 2026-01-31 06:49:44.500071901 +0000 UTC m=+1.656958124 container died 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 01:49:44 np0005603541 systemd[1]: var-lib-containers-storage-overlay-287d80ff9cdac22a5fcd84328c0502793ea3ecc614bee9c931ad793aa2777229-merged.mount: Deactivated successfully.
Jan 31 01:49:44 np0005603541 podman[87548]: 2026-01-31 06:49:44.546899539 +0000 UTC m=+1.703785832 container remove 0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a (image=quay.io/ceph/ceph:v18, name=affectionate_shaw, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:49:44 np0005603541 systemd[1]: libpod-conmon-0e22b73627772d690a0dc0a22dce390563bcbbaeb34c9d11db7347e9735f3b7a.scope: Deactivated successfully.
Jan 31 01:49:44 np0005603541 python3[87626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:49:44 np0005603541 podman[87627]: 2026-01-31 06:49:44.985991655 +0000 UTC m=+0.041427319 container create 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:49:45 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:49:45 np0005603541 systemd[1]: Started libpod-conmon-39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a.scope.
Jan 31 01:49:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82cbf72c4e1dcbf8774f72c1b8cdaa1239c450a039d6d25fab261a82f9a838d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82cbf72c4e1dcbf8774f72c1b8cdaa1239c450a039d6d25fab261a82f9a838d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:45 np0005603541 podman[87627]: 2026-01-31 06:49:44.967043604 +0000 UTC m=+0.022479348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:45 np0005603541 podman[87627]: 2026-01-31 06:49:45.065079043 +0000 UTC m=+0.120514727 container init 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:49:45 np0005603541 podman[87627]: 2026-01-31 06:49:45.069828316 +0000 UTC m=+0.125263980 container start 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:45 np0005603541 podman[87627]: 2026-01-31 06:49:45.075547442 +0000 UTC m=+0.130983126 container attach 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:45 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:49:45 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 01:49:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 01:49:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 01:49:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 31 01:49:46 np0005603541 infallible_engelbart[87642]: enabled application 'rbd' on pool 'vms'
Jan 31 01:49:46 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 31 01:49:46 np0005603541 systemd[1]: libpod-39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a.scope: Deactivated successfully.
Jan 31 01:49:46 np0005603541 podman[87627]: 2026-01-31 06:49:46.021637486 +0000 UTC m=+1.077073200 container died 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:46 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a82cbf72c4e1dcbf8774f72c1b8cdaa1239c450a039d6d25fab261a82f9a838d-merged.mount: Deactivated successfully.
Jan 31 01:49:46 np0005603541 podman[87627]: 2026-01-31 06:49:46.055950634 +0000 UTC m=+1.111386308 container remove 39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a (image=quay.io/ceph/ceph:v18, name=infallible_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 01:49:46 np0005603541 systemd[1]: libpod-conmon-39e4f4b4a5b052fe146175e306642c5d50e26a561fd7c8985ba930e80462ed7a.scope: Deactivated successfully.
Jan 31 01:49:46 np0005603541 python3[87703]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:46 np0005603541 podman[87704]: 2026-01-31 06:49:46.366071534 +0000 UTC m=+0.037777362 container create 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:46 np0005603541 systemd[1]: Started libpod-conmon-39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52.scope.
Jan 31 01:49:46 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:46 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcd9675f36bf5e75ce0e395f555313bf933137b58bb25d674868638a47e45205/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:46 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcd9675f36bf5e75ce0e395f555313bf933137b58bb25d674868638a47e45205/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:46 np0005603541 podman[87704]: 2026-01-31 06:49:46.347355907 +0000 UTC m=+0.019061785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:46 np0005603541 podman[87704]: 2026-01-31 06:49:46.445768815 +0000 UTC m=+0.117474713 container init 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:46 np0005603541 podman[87704]: 2026-01-31 06:49:46.450981269 +0000 UTC m=+0.122687097 container start 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 01:49:46 np0005603541 podman[87704]: 2026-01-31 06:49:46.454005942 +0000 UTC m=+0.125711870 container attach 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:46 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:49:46 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:49:47 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 01:49:47 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:47 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:49:47 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 31 01:49:48 np0005603541 pensive_euler[87717]: enabled application 'rbd' on pool 'volumes'
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 01:49:48 np0005603541 systemd[1]: libpod-39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52.scope: Deactivated successfully.
Jan 31 01:49:48 np0005603541 podman[87704]: 2026-01-31 06:49:48.038055197 +0000 UTC m=+1.709761055 container died 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fcd9675f36bf5e75ce0e395f555313bf933137b58bb25d674868638a47e45205-merged.mount: Deactivated successfully.
Jan 31 01:49:48 np0005603541 podman[87704]: 2026-01-31 06:49:48.073297487 +0000 UTC m=+1.745003315 container remove 39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52 (image=quay.io/ceph/ceph:v18, name=pensive_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:48 np0005603541 systemd[1]: libpod-conmon-39802ddc9ef3a8b52d929bdc89724b381f2d48522049f7267d3570257615fa52.scope: Deactivated successfully.
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:49:48
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Some PGs (0.142857) are inactive; try again later
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 9aecfb00-6572-4889-b70f-b5db1eaf2bba (Updating mon deployment (+2 -> 3))
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 31 01:49:48 np0005603541 python3[87781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:48 np0005603541 podman[87782]: 2026-01-31 06:49:48.378485839 +0000 UTC m=+0.040984989 container create ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:48 np0005603541 systemd[1]: Started libpod-conmon-ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb.scope.
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:48 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:49:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26938a7d5e7c87c245e7bbedbb0ed1475ae1075b7bd0aff33bd2c7083ba0b74f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26938a7d5e7c87c245e7bbedbb0ed1475ae1075b7bd0aff33bd2c7083ba0b74f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:49:48 np0005603541 podman[87782]: 2026-01-31 06:49:48.442799324 +0000 UTC m=+0.105298064 container init ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:49:48 np0005603541 podman[87782]: 2026-01-31 06:49:48.448776186 +0000 UTC m=+0.111274916 container start ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:49:48 np0005603541 podman[87782]: 2026-01-31 06:49:48.452168338 +0000 UTC m=+0.114667098 container attach ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 01:49:48 np0005603541 ceph-mgr[74648]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 31 01:49:48 np0005603541 podman[87782]: 2026-01-31 06:49:48.363123573 +0000 UTC m=+0.025622323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 31 01:49:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: Deploying daemon mon.compute-2 on compute-2
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 31 01:49:49 np0005603541 unruffled_euclid[87798]: enabled application 'rbd' on pool 'backups'
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 31 01:49:49 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev f3703a1a-9e16-4a38-b075-2f0dce2bd512 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:49 np0005603541 systemd[1]: libpod-ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb.scope: Deactivated successfully.
Jan 31 01:49:49 np0005603541 podman[87782]: 2026-01-31 06:49:49.271023265 +0000 UTC m=+0.933522015 container died ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:49:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-26938a7d5e7c87c245e7bbedbb0ed1475ae1075b7bd0aff33bd2c7083ba0b74f-merged.mount: Deactivated successfully.
Jan 31 01:49:49 np0005603541 podman[87782]: 2026-01-31 06:49:49.302207099 +0000 UTC m=+0.964705809 container remove ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb (image=quay.io/ceph/ceph:v18, name=unruffled_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:49:49 np0005603541 systemd[1]: libpod-conmon-ea4b96ea384e91093bc2a0dda6e9a46a1f71619b42a8cdd26487a85fd9c09ceb.scope: Deactivated successfully.
Jan 31 01:49:49 np0005603541 python3[87860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:49 np0005603541 podman[87861]: 2026-01-31 06:49:49.614287065 +0000 UTC m=+0.042553197 container create 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:49:49 np0005603541 systemd[1]: Started libpod-conmon-17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5.scope.
Jan 31 01:49:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6c22f51839039b133c64be8acb1b1b1de316a33ed1ced4eea2e40be927eeb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6c22f51839039b133c64be8acb1b1b1de316a33ed1ced4eea2e40be927eeb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:49 np0005603541 podman[87861]: 2026-01-31 06:49:49.67986487 +0000 UTC m=+0.108131022 container init 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:49:49 np0005603541 podman[87861]: 2026-01-31 06:49:49.685914604 +0000 UTC m=+0.114180736 container start 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:49:49 np0005603541 podman[87861]: 2026-01-31 06:49:49.594275687 +0000 UTC m=+0.022541839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:49 np0005603541 podman[87861]: 2026-01-31 06:49:49.690562995 +0000 UTC m=+0.118829127 container attach 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3542942284' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3542942284' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 31 01:49:50 np0005603541 dazzling_yalow[87876]: enabled application 'rbd' on pool 'images'
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 9d4334e5-c9ce-4f8c-b70b-0f73288a3e49 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:50 np0005603541 systemd[1]: libpod-17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5.scope: Deactivated successfully.
Jan 31 01:49:50 np0005603541 podman[87861]: 2026-01-31 06:49:50.284903926 +0000 UTC m=+0.713170058 container died 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:49:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8d6c22f51839039b133c64be8acb1b1b1de316a33ed1ced4eea2e40be927eeb3-merged.mount: Deactivated successfully.
Jan 31 01:49:50 np0005603541 podman[87861]: 2026-01-31 06:49:50.322230266 +0000 UTC m=+0.750496398 container remove 17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5 (image=quay.io/ceph/ceph:v18, name=dazzling_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:49:50 np0005603541 systemd[1]: libpod-conmon-17e878dea0f5572f121fa7afa57cab52be12dfd7cb998b10b7e7f99d1e8023f5.scope: Deactivated successfully.
Jan 31 01:49:50 np0005603541 python3[87937]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:49:50 np0005603541 podman[87938]: 2026-01-31 06:49:50.621544528 +0000 UTC m=+0.044467502 container create 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 01:49:50 np0005603541 systemd[1]: Started libpod-conmon-774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7.scope.
Jan 31 01:49:50 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:49:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e3edd85d7bfe7291fb6d97d6d46d5fbb27028b607a2cedd2ce56de2ba1a937/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e3edd85d7bfe7291fb6d97d6d46d5fbb27028b607a2cedd2ce56de2ba1a937/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:49:50 np0005603541 podman[87938]: 2026-01-31 06:49:50.6907518 +0000 UTC m=+0.113674774 container init 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 01:49:50 np0005603541 podman[87938]: 2026-01-31 06:49:50.599306787 +0000 UTC m=+0.022229781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:49:50 np0005603541 podman[87938]: 2026-01-31 06:49:50.695247347 +0000 UTC m=+0.118170321 container start 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 01:49:50 np0005603541 podman[87938]: 2026-01-31 06:49:50.699029306 +0000 UTC m=+0.121952320 container attach 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:50 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 31 01:49:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:49:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:51 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:51 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:49:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:52 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:52 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 01:49:53 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:53 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 01:49:53 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:53 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:54 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:54 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 01:49:54 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:54 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 01:49:55 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:55 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 01:49:55 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:55 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:49:55 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gghdjs(active, since 2m)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 9aecfb00-6572-4889-b70f-b5db1eaf2bba (Updating mon deployment (+2 -> 3))
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 9aecfb00-6572-4889-b70f-b5db1eaf2bba (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: Deploying daemon mon.compute-1 on compute-1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0 calling monitor election
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-2 calling monitor election
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 31 01:49:56 np0005603541 ceph-mon[74355]:    application not enabled on pool 'backups'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]:    application not enabled on pool 'images'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]:    application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]:    application not enabled on pool 'cephfs.cephfs.data'
Jan 31 01:49:56 np0005603541 ceph-mon[74355]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=14.291584969s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active pruub 62.744907379s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:49:56 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=14.291584969s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown pruub 62.744907379s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev e69eda70-e913-43c0-91b4-4e01dd6bd7be (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 6a28ee21-bff6-4f3e-b5df-2b23d83482fb (Updating mgr deployment (+2 -> 3))
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:56 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2552847974; not ready for session (expect reconnect)
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: Deploying daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev ba26b4d2-587b-41e9-807a-0d8170884882 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=14.274881363s) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active pruub 63.757472992s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=14.274881363s) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown pruub 63.757472992s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:49:57 np0005603541 ceph-mgr[74648]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 01:49:57 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:49:57.966+0000 7f6ece6f5640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 01:49:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:49:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 01:49:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v86: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:49:58 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 3 completed events
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:49:58 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:58 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:49:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:49:59 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:49:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:49:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:49:59 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 01:50:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 01:50:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v87: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:00 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:00 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:01 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:01 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:02 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 01:50:02 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:02 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:02 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 31 01:50:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:50:03 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:03 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gghdjs(active, since 2m)
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 31 01:50:04 np0005603541 vibrant_shaw[87953]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0 calling monitor election
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-2 calling monitor election
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-1 calling monitor election
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Jan 31 01:50:04 np0005603541 ceph-mon[74355]:    application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 01:50:04 np0005603541 ceph-mon[74355]:    application not enabled on pool 'cephfs.cephfs.data'
Jan 31 01:50:04 np0005603541 ceph-mon[74355]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.16( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.17( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29 pruub=8.524605751s) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active pruub 65.764228821s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.6( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.3( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.19( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29 pruub=8.524605751s) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown pruub 65.764228821s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 193f98a9-6ad1-4fec-9f2b-0d3bf1072437 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 31 01:50:04 np0005603541 systemd[1]: libpod-774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7.scope: Deactivated successfully.
Jan 31 01:50:04 np0005603541 podman[87938]: 2026-01-31 06:50:04.925356252 +0000 UTC m=+14.348279246 container died 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.16( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.17( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 29 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [0] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 01:50:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-69e3edd85d7bfe7291fb6d97d6d46d5fbb27028b607a2cedd2ce56de2ba1a937-merged.mount: Deactivated successfully.
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.hglnzn on compute-1
Jan 31 01:50:04 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.hglnzn on compute-1
Jan 31 01:50:04 np0005603541 systemd[75973]: Starting Mark boot as successful...
Jan 31 01:50:04 np0005603541 systemd[75973]: Finished Mark boot as successful.
Jan 31 01:50:04 np0005603541 podman[87938]: 2026-01-31 06:50:04.980739733 +0000 UTC m=+14.403662707 container remove 774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:04 np0005603541 systemd[1]: libpod-conmon-774332108e6b2d51bf02d4f4b62adb9d0e92a3588ba24dfa551f70ffa274f4a7.scope: Deactivated successfully.
Jan 31 01:50:05 np0005603541 python3[88017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:05 np0005603541 podman[88018]: 2026-01-31 06:50:05.306657128 +0000 UTC m=+0.045157058 container create 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 31 01:50:05 np0005603541 systemd[1]: Started libpod-conmon-07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069.scope.
Jan 31 01:50:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea239266b48ffa32109dd2489ada485fa36f591f0cd5bc64db83859b52b43f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea239266b48ffa32109dd2489ada485fa36f591f0cd5bc64db83859b52b43f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:05 np0005603541 podman[88018]: 2026-01-31 06:50:05.371155488 +0000 UTC m=+0.109655438 container init 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 01:50:05 np0005603541 podman[88018]: 2026-01-31 06:50:05.377488629 +0000 UTC m=+0.115988559 container start 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:50:05 np0005603541 podman[88018]: 2026-01-31 06:50:05.38128436 +0000 UTC m=+0.119784320 container attach 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:50:05 np0005603541 podman[88018]: 2026-01-31 06:50:05.288789913 +0000 UTC m=+0.027289873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4210371500; not ready for session (expect reconnect)
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: Deploying daemon mgr.compute-1.hglnzn on compute-1
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 31 01:50:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev d3db9a06-a52e-488b-b411-532ff0af98ac (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev f3703a1a-9e16-4a38-b075-2f0dce2bd512 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event f3703a1a-9e16-4a38-b075-2f0dce2bd512 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 17 seconds
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 9d4334e5-c9ce-4f8c-b70b-0f73288a3e49 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 9d4334e5-c9ce-4f8c-b70b-0f73288a3e49 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 16 seconds
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev e69eda70-e913-43c0-91b4-4e01dd6bd7be (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event e69eda70-e913-43c0-91b4-4e01dd6bd7be (PG autoscaler increasing pool 4 PGs from 1 to 32) in 10 seconds
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev ba26b4d2-587b-41e9-807a-0d8170884882 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event ba26b4d2-587b-41e9-807a-0d8170884882 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 9 seconds
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 193f98a9-6ad1-4fec-9f2b-0d3bf1072437 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 193f98a9-6ad1-4fec-9f2b-0d3bf1072437 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev d3db9a06-a52e-488b-b411-532ff0af98ac (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 01:50:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event d3db9a06-a52e-488b-b411-532ff0af98ac (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=17/18 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=29/30 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=17/17 les/c/f=18/18/0 sis=29) [0] r=0 lpr=29 pi=[17,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:06 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 01:50:06 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 01:50:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mgr[74648]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 01:50:06 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T06:50:06.725+0000 7f6ece6f5640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 6a28ee21-bff6-4f3e-b5df-2b23d83482fb (Updating mgr deployment (+2 -> 3))
Jan 31 01:50:06 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 6a28ee21-bff6-4f3e-b5df-2b23d83482fb (Updating mgr deployment (+2 -> 3)) in 11 seconds
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev a58ffd57-9427-443a-b885-5f85c5255b37 (Updating crash deployment (+1 -> 3))
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 31 01:50:07 np0005603541 heuristic_pasteur[88033]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 31 01:50:07 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.458669662s) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active pruub 67.804504395s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:07 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.458669662s) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown pruub 67.804504395s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:07 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 31 01:50:07 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 31 01:50:07 np0005603541 systemd[1]: libpod-07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069.scope: Deactivated successfully.
Jan 31 01:50:07 np0005603541 podman[88018]: 2026-01-31 06:50:07.041209595 +0000 UTC m=+1.779709535 container died 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6ea239266b48ffa32109dd2489ada485fa36f591f0cd5bc64db83859b52b43f4-merged.mount: Deactivated successfully.
Jan 31 01:50:07 np0005603541 podman[88018]: 2026-01-31 06:50:07.074805746 +0000 UTC m=+1.813305676 container remove 07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069 (image=quay.io/ceph/ceph:v18, name=heuristic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:07 np0005603541 systemd[1]: libpod-conmon-07b09b040d48b153d2471ca3e15fd9ca0002f8900053235658945aeb9b7e1069.scope: Deactivated successfully.
Jan 31 01:50:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 01:50:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 01:50:07 np0005603541 python3[88145]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: Deploying daemon crash.compute-2 on compute-2
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:08 np0005603541 python3[88216]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842207.7463539-37418-90172664470784/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=31/32 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [0] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:08 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev a58ffd57-9427-443a-b885-5f85c5255b37 (Updating crash deployment (+1 -> 3))
Jan 31 01:50:08 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event a58ffd57-9427-443a-b885-5f85c5255b37 (Updating crash deployment (+1 -> 3)) in 2 seconds
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:08 np0005603541 python3[88318]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:09 np0005603541 python3[88486]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842208.5537677-37432-97406441038235/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f51ba5d134c679721d328da4d12f5852eb7ceaa7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.287687525 +0000 UTC m=+0.038349736 container create f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:09 np0005603541 systemd[1]: Started libpod-conmon-f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21.scope.
Jan 31 01:50:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.271879818 +0000 UTC m=+0.022542049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.369409895 +0000 UTC m=+0.120072126 container init f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.374359684 +0000 UTC m=+0.125021935 container start f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:09 np0005603541 cranky_satoshi[88572]: 167 167
Jan 31 01:50:09 np0005603541 systemd[1]: libpod-f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21.scope: Deactivated successfully.
Jan 31 01:50:09 np0005603541 conmon[88572]: conmon f280d790ed8e59153697 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21.scope/container/memory.events
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.381153126 +0000 UTC m=+0.131815347 container attach f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.381696178 +0000 UTC m=+0.132358399 container died f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 01:50:09 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c18408b162656d2f7f43e8ac20d0547f317699b9bbc477e7264192fe1d3612af-merged.mount: Deactivated successfully.
Jan 31 01:50:09 np0005603541 podman[88555]: 2026-01-31 06:50:09.430736258 +0000 UTC m=+0.181398479 container remove f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:09 np0005603541 systemd[1]: libpod-conmon-f280d790ed8e591536978818a47a282ba30b8c2d7f62ae7dcde360d19cfeec21.scope: Deactivated successfully.
Jan 31 01:50:09 np0005603541 python3[88608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:09 np0005603541 podman[88621]: 2026-01-31 06:50:09.581442675 +0000 UTC m=+0.046693516 container create e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:09 np0005603541 systemd[1]: Started libpod-conmon-e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b.scope.
Jan 31 01:50:09 np0005603541 podman[88628]: 2026-01-31 06:50:09.613163461 +0000 UTC m=+0.060266539 container create 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:50:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 systemd[1]: Started libpod-conmon-653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a.scope.
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 podman[88621]: 2026-01-31 06:50:09.65211877 +0000 UTC m=+0.117369631 container init e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:50:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:09 np0005603541 podman[88621]: 2026-01-31 06:50:09.559778437 +0000 UTC m=+0.025029288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f3b8a463d6e9a00ae8e41d748f2030a520e64eb1d8833696a8fab761c3b7d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f3b8a463d6e9a00ae8e41d748f2030a520e64eb1d8833696a8fab761c3b7d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f3b8a463d6e9a00ae8e41d748f2030a520e64eb1d8833696a8fab761c3b7d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:09 np0005603541 podman[88621]: 2026-01-31 06:50:09.660295926 +0000 UTC m=+0.125546767 container start e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:09 np0005603541 podman[88621]: 2026-01-31 06:50:09.663707717 +0000 UTC m=+0.128958588 container attach e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:09 np0005603541 podman[88628]: 2026-01-31 06:50:09.571878476 +0000 UTC m=+0.018981554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:09 np0005603541 podman[88628]: 2026-01-31 06:50:09.677854594 +0000 UTC m=+0.124957692 container init 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:50:09 np0005603541 podman[88628]: 2026-01-31 06:50:09.681815519 +0000 UTC m=+0.128918597 container start 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 31 01:50:09 np0005603541 podman[88628]: 2026-01-31 06:50:09.692736279 +0000 UTC m=+0.139839357 container attach 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:09 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 11 completed events
Jan 31 01:50:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:10 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 01:50:10 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: Cluster is now healthy
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 25 activating, 31 unknown, 137 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:10 np0005603541 happy_austin[88651]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:50:10 np0005603541 happy_austin[88651]: --> relative data size: 1.0
Jan 31 01:50:10 np0005603541 happy_austin[88651]: --> All data devices are unavailable
Jan 31 01:50:10 np0005603541 systemd[1]: libpod-e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b.scope: Deactivated successfully.
Jan 31 01:50:10 np0005603541 podman[88621]: 2026-01-31 06:50:10.454101346 +0000 UTC m=+0.919352197 container died e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:10 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7a6a5c4b5b40ace42d542bcd5a379f365f2fa0c3f926e69608a4b687357bc975-merged.mount: Deactivated successfully.
Jan 31 01:50:10 np0005603541 podman[88621]: 2026-01-31 06:50:10.499536269 +0000 UTC m=+0.964787110 container remove e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:50:10 np0005603541 systemd[1]: libpod-conmon-e327251fbf2b0514f2973eecfe28d35b06bb4760da0a8a47a9a202fc6c44636b.scope: Deactivated successfully.
Jan 31 01:50:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 01:50:10 np0005603541 mystifying_einstein[88656]: 
Jan 31 01:50:10 np0005603541 mystifying_einstein[88656]: [global]
Jan 31 01:50:10 np0005603541 mystifying_einstein[88656]: 	fsid = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 01:50:10 np0005603541 mystifying_einstein[88656]: 	mon_host = 192.168.122.100
Jan 31 01:50:10 np0005603541 systemd[1]: libpod-653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a.scope: Deactivated successfully.
Jan 31 01:50:10 np0005603541 podman[88628]: 2026-01-31 06:50:10.9182545 +0000 UTC m=+1.365357588 container died 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:10 np0005603541 systemd[1]: var-lib-containers-storage-overlay-85f3b8a463d6e9a00ae8e41d748f2030a520e64eb1d8833696a8fab761c3b7d7-merged.mount: Deactivated successfully.
Jan 31 01:50:10 np0005603541 podman[88628]: 2026-01-31 06:50:10.973977669 +0000 UTC m=+1.421080747 container remove 653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a (image=quay.io/ceph/ceph:v18, name=mystifying_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 01:50:10 np0005603541 systemd[1]: libpod-conmon-653c3c3ee3c8ffb4065ae9829ed200c091179d0c04ed13d382b82a78db7c756a.scope: Deactivated successfully.
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.086664888 +0000 UTC m=+0.036613395 container create df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:11 np0005603541 systemd[1]: Started libpod-conmon-df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac.scope.
Jan 31 01:50:11 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.142935621 +0000 UTC m=+0.092884128 container init df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.14919626 +0000 UTC m=+0.099144747 container start df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.152748505 +0000 UTC m=+0.102697012 container attach df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 01:50:11 np0005603541 stoic_margulis[88900]: 167 167
Jan 31 01:50:11 np0005603541 systemd[1]: libpod-df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac.scope: Deactivated successfully.
Jan 31 01:50:11 np0005603541 conmon[88900]: conmon df51cc314a838d430559 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac.scope/container/memory.events
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.155987902 +0000 UTC m=+0.105936389 container died df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.067369568 +0000 UTC m=+0.017318105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:11 np0005603541 systemd[1]: var-lib-containers-storage-overlay-3e7e028e7b8fac56a89847b849c7d0fb46bbb638637a0d343179617dfbf36399-merged.mount: Deactivated successfully.
Jan 31 01:50:11 np0005603541 podman[88857]: 2026-01-31 06:50:11.191342265 +0000 UTC m=+0.141290752 container remove df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:11 np0005603541 systemd[1]: libpod-conmon-df51cc314a838d43055955f2ae7c7c4f6aecb63526bd1f6912da3577c60f67ac.scope: Deactivated successfully.
Jan 31 01:50:11 np0005603541 python3[88899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:11 np0005603541 podman[88918]: 2026-01-31 06:50:11.30509539 +0000 UTC m=+0.034505675 container create 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:11 np0005603541 podman[88928]: 2026-01-31 06:50:11.330713191 +0000 UTC m=+0.045317582 container create c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:11 np0005603541 systemd[1]: Started libpod-conmon-3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93.scope.
Jan 31 01:50:11 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 01:50:11 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 01:50:11 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f82e8d52a7a7e796b3d8cbf9f595ecc4fa8c088c3f0bb1c31836cde91aa926/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f82e8d52a7a7e796b3d8cbf9f595ecc4fa8c088c3f0bb1c31836cde91aa926/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 systemd[1]: Started libpod-conmon-c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69.scope.
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f82e8d52a7a7e796b3d8cbf9f595ecc4fa8c088c3f0bb1c31836cde91aa926/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 podman[88918]: 2026-01-31 06:50:11.385871447 +0000 UTC m=+0.115281762 container init 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:11 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:11 np0005603541 podman[88918]: 2026-01-31 06:50:11.291125907 +0000 UTC m=+0.020536212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06afffd2906844146585c12b3eddd0630b8ff0e58d88fcbbfe1e402f1016a645/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06afffd2906844146585c12b3eddd0630b8ff0e58d88fcbbfe1e402f1016a645/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06afffd2906844146585c12b3eddd0630b8ff0e58d88fcbbfe1e402f1016a645/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06afffd2906844146585c12b3eddd0630b8ff0e58d88fcbbfe1e402f1016a645/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:11 np0005603541 podman[88918]: 2026-01-31 06:50:11.394225026 +0000 UTC m=+0.123635301 container start 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 01:50:11 np0005603541 podman[88918]: 2026-01-31 06:50:11.397445964 +0000 UTC m=+0.126856269 container attach 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 01:50:11 np0005603541 podman[88928]: 2026-01-31 06:50:11.311636446 +0000 UTC m=+0.026240857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:11 np0005603541 podman[88928]: 2026-01-31 06:50:11.410009713 +0000 UTC m=+0.124614124 container init c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:50:11 np0005603541 podman[88928]: 2026-01-31 06:50:11.415063923 +0000 UTC m=+0.129668314 container start c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:11 np0005603541 podman[88928]: 2026-01-31 06:50:11.418070765 +0000 UTC m=+0.132675216 container attach c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2722832315' entity='client.admin' 
Jan 31 01:50:12 np0005603541 musing_diffie[88949]: set ssl_option
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"} v 0) v1
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]: dispatch
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]': finished
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:12 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:12 np0005603541 podman[88987]: 2026-01-31 06:50:12.104164835 +0000 UTC m=+0.024076305 container died 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-11f82e8d52a7a7e796b3d8cbf9f595ecc4fa8c088c3f0bb1c31836cde91aa926-merged.mount: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[88987]: 2026-01-31 06:50:12.14039807 +0000 UTC m=+0.060309520 container remove 3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93 (image=quay.io/ceph/ceph:v18, name=musing_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-conmon-3f882d71260d604bd7ed36a41e13f1732b0b5aab1256b4c11e48e53809674c93.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]: {
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:    "0": [
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:        {
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "devices": [
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "/dev/loop3"
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            ],
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "lv_name": "ceph_lv0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "lv_size": "7511998464",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "name": "ceph_lv0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "tags": {
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.cluster_name": "ceph",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.crush_device_class": "",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.encrypted": "0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.osd_id": "0",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.type": "block",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:                "ceph.vdo": "0"
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            },
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "type": "block",
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:            "vg_name": "ceph_vg0"
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:        }
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]:    ]
Jan 31 01:50:12 np0005603541 distracted_fermi[88958]: }
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[89001]: 2026-01-31 06:50:12.206127828 +0000 UTC m=+0.022002586 container died c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:50:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-06afffd2906844146585c12b3eddd0630b8ff0e58d88fcbbfe1e402f1016a645-merged.mount: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[89001]: 2026-01-31 06:50:12.251929171 +0000 UTC m=+0.067803919 container remove c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-conmon-c0cbfa32c00ba46ce0ecc6d40d036dd4dfb2cb53e7a61cbac041eac4adbaac69.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 25 activating, 31 unknown, 137 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:12 np0005603541 python3[89039]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:12 np0005603541 podman[89114]: 2026-01-31 06:50:12.457695311 +0000 UTC m=+0.034362382 container create 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 01:50:12 np0005603541 systemd[1]: Started libpod-conmon-73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f.scope.
Jan 31 01:50:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731508ccd57b97e65d4ea521bc6faf073c9f65dc9b5f0b476df8666ed367b548/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731508ccd57b97e65d4ea521bc6faf073c9f65dc9b5f0b476df8666ed367b548/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731508ccd57b97e65d4ea521bc6faf073c9f65dc9b5f0b476df8666ed367b548/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 podman[89114]: 2026-01-31 06:50:12.540581368 +0000 UTC m=+0.117248449 container init 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 01:50:12 np0005603541 podman[89114]: 2026-01-31 06:50:12.444477795 +0000 UTC m=+0.021144896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:12 np0005603541 podman[89114]: 2026-01-31 06:50:12.547662187 +0000 UTC m=+0.124329258 container start 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:12 np0005603541 podman[89114]: 2026-01-31 06:50:12.551944389 +0000 UTC m=+0.128611490 container attach 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.708698509 +0000 UTC m=+0.037818043 container create 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:50:12 np0005603541 systemd[1]: Started libpod-conmon-8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92.scope.
Jan 31 01:50:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.76154405 +0000 UTC m=+0.090663604 container init 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.765803842 +0000 UTC m=+0.094923376 container start 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:50:12 np0005603541 festive_cerf[89218]: 167 167
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.770116575 +0000 UTC m=+0.099236119 container attach 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.770582585 +0000 UTC m=+0.099702129 container died 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.688796244 +0000 UTC m=+0.017915818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-808935e240bbfd05db9f20de87fd0d6748f2b0fde78a8086ed0b362a537606ba-merged.mount: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[89201]: 2026-01-31 06:50:12.810175591 +0000 UTC m=+0.139295125 container remove 8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 01:50:12 np0005603541 systemd[1]: libpod-conmon-8ea34e2d1ce93e5b64d2503d34782fdbfe2c2e7eb9de23cf327a9e4364c6df92.scope: Deactivated successfully.
Jan 31 01:50:12 np0005603541 podman[89246]: 2026-01-31 06:50:12.925547263 +0000 UTC m=+0.038492300 container create cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 01:50:12 np0005603541 systemd[1]: Started libpod-conmon-cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4.scope.
Jan 31 01:50:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4531802d67196725a435b3ee73e9e4a4c5b3662b3a18f5ebf2fa0c192cfbd76d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4531802d67196725a435b3ee73e9e4a4c5b3662b3a18f5ebf2fa0c192cfbd76d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4531802d67196725a435b3ee73e9e4a4c5b3662b3a18f5ebf2fa0c192cfbd76d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4531802d67196725a435b3ee73e9e4a4c5b3662b3a18f5ebf2fa0c192cfbd76d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:12 np0005603541 podman[89246]: 2026-01-31 06:50:12.995927182 +0000 UTC m=+0.108872229 container init cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 01:50:13 np0005603541 podman[89246]: 2026-01-31 06:50:13.000893461 +0000 UTC m=+0.113838498 container start cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:50:13 np0005603541 podman[89246]: 2026-01-31 06:50:12.908610759 +0000 UTC m=+0.021555816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:13 np0005603541 podman[89246]: 2026-01-31 06:50:13.008619405 +0000 UTC m=+0.121564462 container attach cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/4277762610' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]: dispatch
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/2722832315' entity='client.admin' 
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]: dispatch
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]': finished
Jan 31 01:50:13 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:50:13 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:13 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:13 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 31 01:50:13 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:13 np0005603541 beautiful_payne[89156]: Scheduled rgw.rgw update...
Jan 31 01:50:13 np0005603541 beautiful_payne[89156]: Scheduled ingress.rgw.default update...
Jan 31 01:50:13 np0005603541 systemd[1]: libpod-73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f.scope: Deactivated successfully.
Jan 31 01:50:13 np0005603541 podman[89114]: 2026-01-31 06:50:13.166719097 +0000 UTC m=+0.743386198 container died 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:13 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Jan 31 01:50:13 np0005603541 systemd[1]: var-lib-containers-storage-overlay-731508ccd57b97e65d4ea521bc6faf073c9f65dc9b5f0b476df8666ed367b548-merged.mount: Deactivated successfully.
Jan 31 01:50:13 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Jan 31 01:50:13 np0005603541 podman[89114]: 2026-01-31 06:50:13.225127791 +0000 UTC m=+0.801794862 container remove 73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:13 np0005603541 systemd[1]: libpod-conmon-73a41d7b2706b4a28c7b84d39fadfb82ccafb8e3723501978c8701d5b815ef4f.scope: Deactivated successfully.
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]: {
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:        "osd_id": 0,
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:        "type": "bluestore"
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]:    }
Jan 31 01:50:13 np0005603541 hungry_pascal[89279]: }
Jan 31 01:50:13 np0005603541 systemd[1]: libpod-cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4.scope: Deactivated successfully.
Jan 31 01:50:13 np0005603541 podman[89314]: 2026-01-31 06:50:13.809910154 +0000 UTC m=+0.020823528 container died cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 01:50:13 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4531802d67196725a435b3ee73e9e4a4c5b3662b3a18f5ebf2fa0c192cfbd76d-merged.mount: Deactivated successfully.
Jan 31 01:50:13 np0005603541 podman[89314]: 2026-01-31 06:50:13.861921185 +0000 UTC m=+0.072834539 container remove cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:13 np0005603541 systemd[1]: libpod-conmon-cb4054636e7ff1004fb30a2f6bd9b5e8e83355db59e8f564d4facc69fa31ddb4.scope: Deactivated successfully.
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:50:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:14 np0005603541 python3[89404]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:50:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 01:50:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 01:50:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:50:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:14 np0005603541 python3[89475]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842213.949915-37473-255503132705200/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:50:14 np0005603541 python3[89525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.042461782 +0000 UTC m=+0.049052141 container create ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 11bfdb0c-2339-413b-9b81-b6bfdd7c536a (Global Recovery Event) in 27 seconds
Jan 31 01:50:15 np0005603541 systemd[1]: Started libpod-conmon-ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239.scope.
Jan 31 01:50:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f89ba8c95b920e8a456ca52085747524d661f79b60e323a84e156b9e21349f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f89ba8c95b920e8a456ca52085747524d661f79b60e323a84e156b9e21349f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f89ba8c95b920e8a456ca52085747524d661f79b60e323a84e156b9e21349f2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.013224534 +0000 UTC m=+0.019814883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.118145448 +0000 UTC m=+0.124735797 container init ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.124454378 +0000 UTC m=+0.131044737 container start ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.128802722 +0000 UTC m=+0.135393061 container attach ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752930641s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260894775s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752873421s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260894775s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.024462700s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.532501221s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.783143997s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291343689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.18( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.783072472s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291343689s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.987250328s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495620728s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.987224579s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495620728s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782871246s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291336060s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.024331093s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.532501221s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752591133s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260879517s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782844543s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291336060s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752310753s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260879517s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752227783s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260910034s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986851692s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495597839s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.023751259s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.532508850s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986822128s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495597839s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.023723602s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.532508850s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.752174377s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260910034s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.142827034s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651771545s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.142805099s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651771545s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986555099s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495536804s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986533165s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495536804s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782248497s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291305542s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1c( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782224655s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291305542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.751759529s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260871887s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782530785s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291320801s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986420631s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495559692s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.751716614s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260871887s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782072067s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291259766s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782147408s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291320801s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.782049179s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291259766s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986308098s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495559692s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.023562431s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.532852173s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.023534775s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.532852173s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781820297s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291206360s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.2( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781798363s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291206360s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986197472s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495689392s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.986173630s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495689392s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.751087189s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260658264s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.751054764s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260658264s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781452179s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291130066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141776085s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651466370s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781432152s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291130066s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141728401s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651466370s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781330109s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291107178s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.7( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.781309128s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291107178s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984880447s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494804382s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.750486374s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260429382s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984854698s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494804382s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.750464439s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260429382s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141664505s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651664734s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141637802s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651664734s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141613960s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651771545s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.750143051s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260322571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141589165s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651771545s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.750118256s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260322571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.780692101s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291015625s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984416008s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494750977s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984390259s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494750977s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.750011444s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260421753s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749987602s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260421753s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.780659676s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291015625s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141303062s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651847839s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141281128s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651847839s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984200478s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494812012s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.984177589s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494812012s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749588013s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260246277s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749562263s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260246277s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.780466080s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291236877s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141034126s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651824951s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.780439377s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291236877s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141194344s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.652000427s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141011238s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651824951s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749506950s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260337830s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.141159058s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.652000427s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749477386s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260337830s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983811378s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494743347s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983424187s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494415283s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983481407s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494499207s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749298096s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.260322571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.780257225s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.291297913s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983461380s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494499207s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.749274254s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.260322571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140684128s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651855469s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983257294s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494453430s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140659332s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651855469s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983236313s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494453430s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.779331207s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.290657043s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140576363s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651916504s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.16( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.779309273s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.290657043s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140554428s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651916504s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983185768s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494682312s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982905388s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494415283s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983149529s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494682312s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.748069763s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.259666443s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.748045921s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.259666443s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140269279s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651931763s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.140248299s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651931763s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982743263s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494430542s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.778751373s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.290519714s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982695580s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494430542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.15( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.778728485s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.290519714s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982275963s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494186401s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982253075s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494186401s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983648300s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.495620728s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983627319s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.495620728s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.747719765s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.259780884s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.982020378s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 81.494132996s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.772199631s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.284317017s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.981997490s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494132996s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.10( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.772178650s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.284317017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.747587204s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.259780884s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=13.983786583s) [1] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.494743347s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.139664650s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.651977539s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.139647484s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.651977539s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.730018616s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 81.242355347s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=13.729997635s) [1] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.242355347s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.771852493s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.284309387s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.139517784s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 76.652008057s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.1f( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.771819115s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.284309387s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.9( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.778812408s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.291297913s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35 pruub=9.139492989s) [1] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.652008057s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.778001785s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 82.290649414s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[5.11( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.777976990s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.290649414s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.19( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.1d( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.14( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.a( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.e( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.1( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.4( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.6( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.9( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=0/0 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.1e( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 35 pg[2.1f( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 01:50:15 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0[74351]: 2026-01-31T06:50:15.676+0000 7f6aa1646640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e2 new map
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:50:15.676874+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.1e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.2( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.8( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.a( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.b( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.14( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.10( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35) [0] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 36 pg[7.1d( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:50:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:15 np0005603541 ceph-mgr[74648]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 01:50:15 np0005603541 systemd[1]: libpod-ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239.scope: Deactivated successfully.
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.739768249 +0000 UTC m=+0.746358588 container died ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:15 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5f89ba8c95b920e8a456ca52085747524d661f79b60e323a84e156b9e21349f2-merged.mount: Deactivated successfully.
Jan 31 01:50:15 np0005603541 podman[89526]: 2026-01-31 06:50:15.793084831 +0000 UTC m=+0.799675150 container remove ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239 (image=quay.io/ceph/ceph:v18, name=upbeat_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:15 np0005603541 systemd[1]: libpod-conmon-ecf1db50a04559e1e0797715d57d3d3f052131a4bbf45461ee835097e035e239.scope: Deactivated successfully.
Jan 31 01:50:16 np0005603541 python3[89604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.133453512 +0000 UTC m=+0.044353039 container create abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:16 np0005603541 systemd[1]: Started libpod-conmon-abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014.scope.
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:16 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8175f4d83bc0d2b736c82eede3f9cdb4e110b7432a99b34d34f122cecdf27954/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8175f4d83bc0d2b736c82eede3f9cdb4e110b7432a99b34d34f122cecdf27954/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8175f4d83bc0d2b736c82eede3f9cdb4e110b7432a99b34d34f122cecdf27954/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.115717229 +0000 UTC m=+0.026616776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.219007333 +0000 UTC m=+0.129906890 container init abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.226013801 +0000 UTC m=+0.136913338 container start abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.229380172 +0000 UTC m=+0.140279719 container attach abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:50:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:16 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 01:50:16 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:16 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:50:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:16 np0005603541 heuristic_chandrasekhar[89620]: Scheduled mds.cephfs update...
Jan 31 01:50:16 np0005603541 systemd[1]: libpod-abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014.scope: Deactivated successfully.
Jan 31 01:50:16 np0005603541 conmon[89620]: conmon abbd61206c211c76ed63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014.scope/container/memory.events
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.800155569 +0000 UTC m=+0.711055136 container died abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8175f4d83bc0d2b736c82eede3f9cdb4e110b7432a99b34d34f122cecdf27954-merged.mount: Deactivated successfully.
Jan 31 01:50:16 np0005603541 podman[89605]: 2026-01-31 06:50:16.838154006 +0000 UTC m=+0.749053533 container remove abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014 (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:16 np0005603541 systemd[1]: libpod-conmon-abbd61206c211c76ed63bda6b88d2024236addedf8a51779c9ac4d5c2e81f014.scope: Deactivated successfully.
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:17 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 31 01:50:17 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 31 01:50:17 np0005603541 python3[89736]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:50:18 np0005603541 python3[89809]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842217.6121316-37521-162439918032901/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=6179fb8736d86099e122798f305813e20025174a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:50:18 np0005603541 ceph-mon[74355]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 01:50:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:18 np0005603541 python3[89859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:18 np0005603541 podman[89860]: 2026-01-31 06:50:18.736113476 +0000 UTC m=+0.054719933 container create 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:18 np0005603541 systemd[1]: Started libpod-conmon-2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84.scope.
Jan 31 01:50:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406f397d758c37ee2ba0c9969513b0402a01e9e4991f2a37f33d535de2a84b3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406f397d758c37ee2ba0c9969513b0402a01e9e4991f2a37f33d535de2a84b3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:18 np0005603541 podman[89860]: 2026-01-31 06:50:18.712565174 +0000 UTC m=+0.031171711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:18 np0005603541 podman[89860]: 2026-01-31 06:50:18.807545192 +0000 UTC m=+0.126151679 container init 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 01:50:18 np0005603541 podman[89860]: 2026-01-31 06:50:18.813717244 +0000 UTC m=+0.132323701 container start 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:50:18 np0005603541 podman[89860]: 2026-01-31 06:50:18.818742209 +0000 UTC m=+0.137348666 container attach 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:50:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:19 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 01:50:19 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 01:50:19 np0005603541 ceph-mon[74355]: Deploying daemon osd.2 on compute-2
Jan 31 01:50:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 31 01:50:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 01:50:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 01:50:19 np0005603541 systemd[1]: libpod-2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84.scope: Deactivated successfully.
Jan 31 01:50:19 np0005603541 podman[89900]: 2026-01-31 06:50:19.479082151 +0000 UTC m=+0.024335263 container died 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-406f397d758c37ee2ba0c9969513b0402a01e9e4991f2a37f33d535de2a84b3d-merged.mount: Deactivated successfully.
Jan 31 01:50:19 np0005603541 podman[89900]: 2026-01-31 06:50:19.513229205 +0000 UTC m=+0.058482317 container remove 2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84 (image=quay.io/ceph/ceph:v18, name=exciting_cerf, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 01:50:19 np0005603541 systemd[1]: libpod-conmon-2d188d95c9df3f02e1bfde35834cdd3077680fe1c6c05e2b5e28d8d078180e84.scope: Deactivated successfully.
Jan 31 01:50:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Jan 31 01:50:20 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 12 completed events
Jan 31 01:50:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Jan 31 01:50:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:50:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:20 np0005603541 python3[89940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:20 np0005603541 podman[89942]: 2026-01-31 06:50:20.363373608 +0000 UTC m=+0.074855911 container create 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:20 np0005603541 podman[89942]: 2026-01-31 06:50:20.311926666 +0000 UTC m=+0.023408989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:20 np0005603541 systemd[1]: Started libpod-conmon-74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d.scope.
Jan 31 01:50:20 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98170687e2d2b63239c962fd8b89e45ac966ddf4a17dda87c2f752f9945d573a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98170687e2d2b63239c962fd8b89e45ac966ddf4a17dda87c2f752f9945d573a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:20 np0005603541 podman[89942]: 2026-01-31 06:50:20.511793517 +0000 UTC m=+0.223275840 container init 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 01:50:20 np0005603541 podman[89942]: 2026-01-31 06:50:20.516626467 +0000 UTC m=+0.228108780 container start 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:20 np0005603541 podman[89942]: 2026-01-31 06:50:20.545825547 +0000 UTC m=+0.257307880 container attach 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:50:20 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 01:50:20 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 01:50:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 01:50:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782995466' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 01:50:21 np0005603541 suspicious_hofstadter[89959]: 
Jan 31 01:50:21 np0005603541 suspicious_hofstadter[89959]: {"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":17,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":2,"osd_up_since":1769842176,"num_in_osds":3,"osd_in_since":1769842212,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56135680,"bytes_avail":14967861248,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T06:50:14.257446+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 01:50:21 np0005603541 systemd[1]: libpod-74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d.scope: Deactivated successfully.
Jan 31 01:50:21 np0005603541 podman[89984]: 2026-01-31 06:50:21.166290064 +0000 UTC m=+0.032359200 container died 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:21 np0005603541 systemd[1]: var-lib-containers-storage-overlay-98170687e2d2b63239c962fd8b89e45ac966ddf4a17dda87c2f752f9945d573a-merged.mount: Deactivated successfully.
Jan 31 01:50:21 np0005603541 podman[89984]: 2026-01-31 06:50:21.201951095 +0000 UTC m=+0.068020201 container remove 74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d (image=quay.io/ceph/ceph:v18, name=suspicious_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 01:50:21 np0005603541 systemd[1]: libpod-conmon-74e4d38eaff6805132537b10d3491a557eca90ece97aa020b0b0866cad5a478d.scope: Deactivated successfully.
Jan 31 01:50:21 np0005603541 python3[90024]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:21 np0005603541 podman[90025]: 2026-01-31 06:50:21.566340462 +0000 UTC m=+0.047182837 container create ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:21 np0005603541 systemd[1]: Started libpod-conmon-ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d.scope.
Jan 31 01:50:21 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a2610ced27e065ded0b5ea23f13295f99dfecfefc9be079d0508e874a1120/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a2610ced27e065ded0b5ea23f13295f99dfecfefc9be079d0508e874a1120/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:21 np0005603541 podman[90025]: 2026-01-31 06:50:21.634283411 +0000 UTC m=+0.115125776 container init ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:21 np0005603541 podman[90025]: 2026-01-31 06:50:21.639756927 +0000 UTC m=+0.120599302 container start ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:21 np0005603541 podman[90025]: 2026-01-31 06:50:21.643121139 +0000 UTC m=+0.123963534 container attach ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:21 np0005603541 podman[90025]: 2026-01-31 06:50:21.549847365 +0000 UTC m=+0.030689770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 01:50:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 01:50:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 01:50:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392112215' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 01:50:22 np0005603541 keen_chebyshev[90039]: 
Jan 31 01:50:22 np0005603541 keen_chebyshev[90039]: {"epoch":3,"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","modified":"2026-01-31T06:49:57.718936Z","created":"2026-01-31T06:46:54.230301Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 31 01:50:22 np0005603541 keen_chebyshev[90039]: dumped monmap epoch 3
Jan 31 01:50:22 np0005603541 systemd[1]: libpod-ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d.scope: Deactivated successfully.
Jan 31 01:50:22 np0005603541 podman[90025]: 2026-01-31 06:50:22.216952133 +0000 UTC m=+0.697794508 container died ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:50:22 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5c5a2610ced27e065ded0b5ea23f13295f99dfecfefc9be079d0508e874a1120-merged.mount: Deactivated successfully.
Jan 31 01:50:22 np0005603541 podman[90025]: 2026-01-31 06:50:22.253325122 +0000 UTC m=+0.734167487 container remove ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d (image=quay.io/ceph/ceph:v18, name=keen_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:50:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:22 np0005603541 systemd[1]: libpod-conmon-ff7cf719ab1aad6dc8a16db4dd3b4d14b02a54fd4d463d7ada4adcbca14b7b9d.scope: Deactivated successfully.
Jan 31 01:50:22 np0005603541 python3[90101]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:22 np0005603541 podman[90102]: 2026-01-31 06:50:22.955660192 +0000 UTC m=+0.050789467 container create 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 01:50:22 np0005603541 systemd[1]: Started libpod-conmon-8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e.scope.
Jan 31 01:50:23 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820628b655ee486411789c8473a61ea7061bfaf9206d749dbd063342eb64c3cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820628b655ee486411789c8473a61ea7061bfaf9206d749dbd063342eb64c3cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:23.023435847 +0000 UTC m=+0.118565142 container init 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:22.929498805 +0000 UTC m=+0.024628120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:23.02883987 +0000 UTC m=+0.123969145 container start 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:23.032133201 +0000 UTC m=+0.127262546 container attach 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 31 01:50:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239664565' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 01:50:23 np0005603541 affectionate_swartz[90117]: [client.openstack]
Jan 31 01:50:23 np0005603541 affectionate_swartz[90117]: #011key = AQAnpX1pAAAAABAAzTaottZ9ZAhzIerr7s6NMg==
Jan 31 01:50:23 np0005603541 affectionate_swartz[90117]: #011caps mgr = "allow *"
Jan 31 01:50:23 np0005603541 affectionate_swartz[90117]: #011caps mon = "profile rbd"
Jan 31 01:50:23 np0005603541 affectionate_swartz[90117]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 01:50:23 np0005603541 systemd[1]: libpod-8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e.scope: Deactivated successfully.
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:23.677337219 +0000 UTC m=+0.772466524 container died 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 01:50:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-820628b655ee486411789c8473a61ea7061bfaf9206d749dbd063342eb64c3cb-merged.mount: Deactivated successfully.
Jan 31 01:50:23 np0005603541 podman[90102]: 2026-01-31 06:50:23.720283651 +0000 UTC m=+0.815412936 container remove 8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e (image=quay.io/ceph/ceph:v18, name=affectionate_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 01:50:23 np0005603541 systemd[1]: libpod-conmon-8fe7c0ce1825149db066d735a43285f16cf2300a02cc93eb5dfe41a2bfa8db1e.scope: Deactivated successfully.
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:24 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 01:50:24 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3239664565' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:24 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:24 np0005603541 ansible-async_wrapper.py[90303]: Invoked with j372759777453 30 /home/zuul/.ansible/tmp/ansible-tmp-1769842224.582722-37593-48434796601258/AnsiballZ_command.py _
Jan 31 01:50:24 np0005603541 ansible-async_wrapper.py[90306]: Starting module and watcher
Jan 31 01:50:24 np0005603541 ansible-async_wrapper.py[90306]: Start watching 90307 (30)
Jan 31 01:50:24 np0005603541 ansible-async_wrapper.py[90307]: Start module (90307)
Jan 31 01:50:24 np0005603541 ansible-async_wrapper.py[90303]: Return async_wrapper task started.
Jan 31 01:50:25 np0005603541 python3[90308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.166188238 +0000 UTC m=+0.024788943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.279699523 +0000 UTC m=+0.138300208 container create f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:25 np0005603541 systemd[1]: Started libpod-conmon-f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9.scope.
Jan 31 01:50:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2629878b5b024954a07571d8c8515ba3f7abc560aa1e3c0b7f8a1eeaa2a83/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2629878b5b024954a07571d8c8515ba3f7abc560aa1e3c0b7f8a1eeaa2a83/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 31 01:50:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.366965931 +0000 UTC m=+0.225566646 container init f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.371073802 +0000 UTC m=+0.229674487 container start f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.374435675 +0000 UTC m=+0.233036360 container attach f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:50:25 np0005603541 ceph-mon[74355]: from='osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 01:50:25 np0005603541 ceph-mon[74355]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 01:50:25 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:50:25 np0005603541 agitated_heisenberg[90324]: 
Jan 31 01:50:25 np0005603541 agitated_heisenberg[90324]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 01:50:25 np0005603541 systemd[1]: libpod-f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9.scope: Deactivated successfully.
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.913009057 +0000 UTC m=+0.771609742 container died f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6ed2629878b5b024954a07571d8c8515ba3f7abc560aa1e3c0b7f8a1eeaa2a83-merged.mount: Deactivated successfully.
Jan 31 01:50:25 np0005603541 podman[90309]: 2026-01-31 06:50:25.949715584 +0000 UTC m=+0.808316269 container remove f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9 (image=quay.io/ceph/ceph:v18, name=agitated_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:25 np0005603541 systemd[1]: libpod-conmon-f3036bf1c951ffe45cb1efdfcdcdd4d5d7b873feb15be0db07de2ca8cdb508e9.scope: Deactivated successfully.
Jan 31 01:50:25 np0005603541 ansible-async_wrapper.py[90307]: Module complete (90307)
Jan 31 01:50:25 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iujpur started
Jan 31 01:50:25 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mgr.compute-2.iujpur 192.168.122.102:0/1140847569; not ready for session (expect reconnect)
Jan 31 01:50:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:26 np0005603541 python3[90411]: ansible-ansible.legacy.async_status Invoked with jid=j372759777453.90303 mode=status _async_dir=/root/.ansible_async
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:26 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:26 np0005603541 python3[90460]: ansible-ansible.legacy.async_status Invoked with jid=j372759777453.90303 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e37 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: from='osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 01:50:26 np0005603541 ceph-mon[74355]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 01:50:26 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mgr.compute-2.iujpur 192.168.122.102:0/1140847569; not ready for session (expect reconnect)
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.gghdjs(active, since 2m), standbys: compute-2.iujpur
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.iujpur", "id": "compute-2.iujpur"} v 0) v1
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iujpur", "id": "compute-2.iujpur"}]: dispatch
Jan 31 01:50:27 np0005603541 python3[90486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.163797793 +0000 UTC m=+0.042142123 container create 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:27 np0005603541 systemd[1]: Started libpod-conmon-240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819.scope.
Jan 31 01:50:27 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5737452768b99e315c9754aec116da4ae5fdfd324713e4bdf9d7e85e63802671/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5737452768b99e315c9754aec116da4ae5fdfd324713e4bdf9d7e85e63802671/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.145526241 +0000 UTC m=+0.023870621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.244861416 +0000 UTC m=+0.123205746 container init 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.249757088 +0000 UTC m=+0.128101418 container start 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.262247526 +0000 UTC m=+0.140591856 container attach 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:27 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:27 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:27 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:27 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:50:27 np0005603541 blissful_hoover[90503]: 
Jan 31 01:50:27 np0005603541 blissful_hoover[90503]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 01:50:27 np0005603541 systemd[1]: libpod-240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819.scope: Deactivated successfully.
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.799116916 +0000 UTC m=+0.677461266 container died 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:50:27 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5737452768b99e315c9754aec116da4ae5fdfd324713e4bdf9d7e85e63802671-merged.mount: Deactivated successfully.
Jan 31 01:50:27 np0005603541 podman[90487]: 2026-01-31 06:50:27.858172395 +0000 UTC m=+0.736516735 container remove 240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819 (image=quay.io/ceph/ceph:v18, name=blissful_hoover, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 01:50:27 np0005603541 systemd[1]: libpod-conmon-240f196338ed9dc3d8ca783813f05c8d106f3acb75996735db9caf9f088b4819.scope: Deactivated successfully.
Jan 31 01:50:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.hglnzn started
Jan 31 01:50:28 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mgr.compute-1.hglnzn 192.168.122.101:0/148713632; not ready for session (expect reconnect)
Jan 31 01:50:28 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:28 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:28 np0005603541 python3[90746]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:28 np0005603541 podman[90747]: 2026-01-31 06:50:28.709749464 +0000 UTC m=+0.045951117 container create 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:50:28 np0005603541 systemd[1]: Started libpod-conmon-15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3.scope.
Jan 31 01:50:28 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f301aeeadce1fb3c7fc0b942cd19eba7b438ef084a6a4895318c5b9bd80550b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f301aeeadce1fb3c7fc0b942cd19eba7b438ef084a6a4895318c5b9bd80550b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:28 np0005603541 podman[90747]: 2026-01-31 06:50:28.685361241 +0000 UTC m=+0.021562954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:28 np0005603541 podman[90747]: 2026-01-31 06:50:28.805975952 +0000 UTC m=+0.142177595 container init 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:50:28 np0005603541 podman[90747]: 2026-01-31 06:50:28.811296224 +0000 UTC m=+0.147497837 container start 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:28 np0005603541 podman[90747]: 2026-01-31 06:50:28.814451832 +0000 UTC m=+0.150653475 container attach 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 01:50:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:50:29 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:50:29 np0005603541 elastic_elion[90762]: 
Jan 31 01:50:29 np0005603541 elastic_elion[90762]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 31 01:50:29 np0005603541 systemd[1]: libpod-15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3.scope: Deactivated successfully.
Jan 31 01:50:29 np0005603541 podman[90747]: 2026-01-31 06:50:29.380714987 +0000 UTC m=+0.716916620 container died 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:29 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f301aeeadce1fb3c7fc0b942cd19eba7b438ef084a6a4895318c5b9bd80550b9-merged.mount: Deactivated successfully.
Jan 31 01:50:29 np0005603541 podman[90747]: 2026-01-31 06:50:29.42085243 +0000 UTC m=+0.757054053 container remove 15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3 (image=quay.io/ceph/ceph:v18, name=elastic_elion, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:29 np0005603541 systemd[1]: libpod-conmon-15cddb8c6990c53733ac70ba420b0280f4db4deb61de07f33f663adc134248d3.scope: Deactivated successfully.
Jan 31 01:50:29 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from mgr.compute-1.hglnzn 192.168.122.101:0/148713632; not ready for session (expect reconnect)
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432985306s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.261123657s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432985306s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.261123657s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=38 pruub=10.705371857s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 92.533584595s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=38 pruub=10.705371857s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.533584595s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432564735s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.260986328s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432564735s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.260986328s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.1b( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.204056740s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.032531738s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.1b( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.204056740s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.032531738s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.667330742s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active pruub 97.495925903s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432415962s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.261009216s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.667330742s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.495925903s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432244301s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.260879517s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432244301s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.260879517s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.432415962s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.261009216s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=38 pruub=10.823138237s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 92.651870728s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=38 pruub=10.823138237s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.651870728s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.431925774s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.260765076s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.431925774s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.260765076s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.666989326s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active pruub 97.495880127s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.666989326s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.495880127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.0( empty local-lis/les=29/30 n=0 ec=17/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.462202072s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.291183472s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.431778908s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.260749817s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.0( empty local-lis/les=29/30 n=0 ec=17/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.462202072s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.291183472s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.431778908s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.260749817s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.665990829s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 active pruub 97.495063782s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=38 pruub=15.665990829s) [] r=-1 lpr=38 pi=[27,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.495063782s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.a( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.204067230s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033172607s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.a( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.204067230s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033172607s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461934090s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.291130066s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461934090s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.291130066s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.d( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203903198s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033187866s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.d( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203903198s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033187866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.c( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203966141s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033279419s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.c( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203966141s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033279419s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461653709s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.291007996s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.a( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203938484s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033317566s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.a( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203938484s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033317566s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461511612s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.290946960s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461653709s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.291007996s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461511612s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.290946960s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.14( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203812599s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033317566s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.14( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203812599s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033317566s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.10( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203772545s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033325195s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.10( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203772545s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033325195s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.13( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203739166s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033332825s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.13( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203739166s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033332825s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.15( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203705788s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033370972s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[2.15( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203705788s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033370972s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.430457115s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 97.260147095s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461048126s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.290733337s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=15.430457115s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.260147095s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461048126s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.290733337s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461114883s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 90.290946960s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=8.461114883s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.290946960s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.1d( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203383446s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 92.033424377s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 38 pg[7.1d( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=10.203383446s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.033424377s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:29 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:29 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:30 np0005603541 ansible-async_wrapper.py[90306]: Done in kid B.
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.gghdjs(active, since 2m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.hglnzn", "id": "compute-1.hglnzn"} v 0) v1
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hglnzn", "id": "compute-1.hglnzn"}]: dispatch
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:50:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:30 np0005603541 python3[90822]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:30 np0005603541 podman[90823]: 2026-01-31 06:50:30.354711132 +0000 UTC m=+0.053553384 container create 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:30 np0005603541 systemd[1]: Started libpod-conmon-23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70.scope.
Jan 31 01:50:30 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7283aaba280f46e8110390d2913af24d4207d119a05b1ccf8404847e945c2f5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7283aaba280f46e8110390d2913af24d4207d119a05b1ccf8404847e945c2f5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:30 np0005603541 podman[90823]: 2026-01-31 06:50:30.331670953 +0000 UTC m=+0.030513255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:30 np0005603541 podman[90823]: 2026-01-31 06:50:30.43029947 +0000 UTC m=+0.129141732 container init 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:50:30 np0005603541 podman[90823]: 2026-01-31 06:50:30.437628511 +0000 UTC m=+0.136470763 container start 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 01:50:30 np0005603541 podman[90823]: 2026-01-31 06:50:30.441119188 +0000 UTC m=+0.139961470 container attach 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:50:30 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:30 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:30 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 01:50:30 np0005603541 admiring_roentgen[90839]: 
Jan 31 01:50:30 np0005603541 admiring_roentgen[90839]: [{"container_id": "3a506549d180", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.72%", "created": "2026-01-31T06:48:15.384825Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T06:48:15.662152Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T06:49:11.712077Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2026-01-31T06:48:14.658564Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@crash.compute-0", "version": "18.2.7"}, {"container_id": "6672f2dbf618", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.60%", "created": "2026-01-31T06:48:55.525056Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-31T06:48:55.666969Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T06:50:29.339949Z", "memory_usage": 11712593, "ports": [], "service_name": 
"crash", "started": "2026-01-31T06:48:55.276307Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@crash.compute-1", "version": "18.2.7"}, {"container_id": "2a94ab53eec0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.27%", "created": "2026-01-31T06:50:08.723355Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-31T06:50:08.793698Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T06:50:29.794345Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2026-01-31T06:50:08.626131Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@crash.compute-2", "version": "18.2.7"}, {"container_id": "d0a9f4892794", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "41.77%", "created": "2026-01-31T06:47:02.240098Z", "daemon_id": "compute-0.gghdjs", "daemon_name": "mgr.compute-0.gghdjs", "daemon_type": "mgr", "events": ["2026-01-31T06:48:21.344301Z daemon:mgr.compute-0.gghdjs [INFO] \"Reconfigured mgr.compute-0.gghdjs on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, 
"last_refresh": "2026-01-31T06:49:11.712001Z", "memory_usage": 546203238, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T06:47:01.681742Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@mgr.compute-0.gghdjs", "version": "18.2.7"}, {"container_id": "eb04de431f69", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "99.96%", "created": "2026-01-31T06:50:06.756448Z", "daemon_id": "compute-1.hglnzn", "daemon_name": "mgr.compute-1.hglnzn", "daemon_type": "mgr", "events": ["2026-01-31T06:50:06.895257Z daemon:mgr.compute-1.hglnzn [INFO] \"Deployed mgr.compute-1.hglnzn on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T06:50:29.340282Z", "memory_usage": 506357350, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T06:50:06.547560Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@mgr.compute-1.hglnzn", "version": "18.2.7"}, {"container_id": "75d90d04233b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "71.33%", "created": "2026-01-31T06:49:57.759063Z", "daemon_id": "compute-2.iujpur", "daemon_name": "mgr.compute-2.iujpur", 
"daemon_type": "mgr", "events": ["2026-01-31T06:50:04.919114Z daemon:mgr.compute-2.iujpur [INFO] \"Deployed mgr.compute-2.iujpur on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T06:50:29.794273Z", "memory_usage": 514955673, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T06:49:57.676299Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@mgr.compute-2.iujpur", "version": "18.2.7"}, {"container_id": "ea2bfa427050", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.47%", "created": "2026-01-31T06:46:56.194954Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T06:48:20.574099Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T06:49:11.711881Z", "memory_request": 2147483648, "memory_usage": 34372321, "ports": [], "service_name": "mon", "started": "2026-01-31T06:46:58.580273Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a@mon.compute-0", "version": "18.2.7"}, {"container_id": "07192c2211e5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.43%", 
"created": "2026-01-31T06:49:53.285093Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-31T06:49:56.068067Z daemon
Jan 31 01:50:30 np0005603541 systemd[1]: libpod-23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70.scope: Deactivated successfully.
Jan 31 01:50:31 np0005603541 podman[90864]: 2026-01-31 06:50:31.039215261 +0000 UTC m=+0.033735835 container died 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:31 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7283aaba280f46e8110390d2913af24d4207d119a05b1ccf8404847e945c2f5b-merged.mount: Deactivated successfully.
Jan 31 01:50:31 np0005603541 podman[90864]: 2026-01-31 06:50:31.072685108 +0000 UTC m=+0.067205682 container remove 23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70 (image=quay.io/ceph/ceph:v18, name=admiring_roentgen, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:31 np0005603541 systemd[1]: libpod-conmon-23960bcb275a94614cbd1046daee6eebbcced02102ff924c12d6b8907bb39a70.scope: Deactivated successfully.
Jan 31 01:50:31 np0005603541 rsyslogd[1004]: message too long (12805) with configured size 8096, begin of message is: [{"container_id": "3a506549d180", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 01:50:31 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:31 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:32 np0005603541 python3[90904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.051125573 +0000 UTC m=+0.039838506 container create 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 01:50:32 np0005603541 systemd[1]: Started libpod-conmon-2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9.scope.
Jan 31 01:50:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 01:50:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73012f6a9b4aaed72147922fc5ba3a50886fbc1584092ffd8ea7de1b68f82f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73012f6a9b4aaed72147922fc5ba3a50886fbc1584092ffd8ea7de1b68f82f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.030453342 +0000 UTC m=+0.019166345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.13518343 +0000 UTC m=+0.123896373 container init 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.139948378 +0000 UTC m=+0.128661301 container start 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.143021864 +0000 UTC m=+0.131734797 container attach 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:32 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:32 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 01:50:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803010226' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 01:50:32 np0005603541 inspiring_noether[90920]: 
Jan 31 01:50:32 np0005603541 inspiring_noether[90920]: {"fsid":"ef73c6e0-6d85-55c2-9347-1f544d3e3d3a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":28,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":2,"osd_up_since":1769842176,"num_in_osds":3,"osd_in_since":1769842212,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56184832,"bytes_avail":14967812096,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T06:50:14.257446+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 01:50:32 np0005603541 systemd[1]: libpod-2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9.scope: Deactivated successfully.
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.770492863 +0000 UTC m=+0.759205796 container died 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d73012f6a9b4aaed72147922fc5ba3a50886fbc1584092ffd8ea7de1b68f82f1-merged.mount: Deactivated successfully.
Jan 31 01:50:32 np0005603541 podman[90905]: 2026-01-31 06:50:32.973627954 +0000 UTC m=+0.962340877 container remove 2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9 (image=quay.io/ceph/ceph:v18, name=inspiring_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 01:50:33 np0005603541 systemd[1]: libpod-conmon-2cd5fe76f2456516bc52c3abe737857005a82eed96acae784efcf9288c98afe9.scope: Deactivated successfully.
Jan 31 01:50:33 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Jan 31 01:50:33 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Jan 31 01:50:33 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:33 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:33 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:33 np0005603541 python3[90982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:33 np0005603541 podman[90983]: 2026-01-31 06:50:33.912399958 +0000 UTC m=+0.081850274 container create e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:33 np0005603541 podman[90983]: 2026-01-31 06:50:33.851980254 +0000 UTC m=+0.021430580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:33 np0005603541 systemd[1]: Started libpod-conmon-e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4.scope.
Jan 31 01:50:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4bdb2c44c38f2e8123db3515be23d9e8cf2b812ef2ed22e4c155b57ce6a776/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4bdb2c44c38f2e8123db3515be23d9e8cf2b812ef2ed22e4c155b57ce6a776/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:34 np0005603541 podman[90983]: 2026-01-31 06:50:34.05447513 +0000 UTC m=+0.223925476 container init e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:50:34 np0005603541 podman[90983]: 2026-01-31 06:50:34.059884863 +0000 UTC m=+0.229335169 container start e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:50:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:34 np0005603541 podman[90983]: 2026-01-31 06:50:34.07712766 +0000 UTC m=+0.246577986 container attach e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:34 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:34 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:34 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 01:50:34 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673261415' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 01:50:34 np0005603541 wonderful_buck[90998]: 
Jan 31 01:50:34 np0005603541 wonderful_buck[90998]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502906777","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd.1","name":"osd_mclock_max_capacity_iops_hdd","value":"452.440518","level":"basic","can_update_at_runtime":true,"mask":""}]
Jan 31 01:50:34 np0005603541 systemd[1]: libpod-e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4.scope: Deactivated successfully.
Jan 31 01:50:34 np0005603541 podman[90983]: 2026-01-31 06:50:34.576912673 +0000 UTC m=+0.746362999 container died e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4e4bdb2c44c38f2e8123db3515be23d9e8cf2b812ef2ed22e4c155b57ce6a776-merged.mount: Deactivated successfully.
Jan 31 01:50:34 np0005603541 podman[90983]: 2026-01-31 06:50:34.618539631 +0000 UTC m=+0.787989927 container remove e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:50:34 np0005603541 systemd[1]: libpod-conmon-e340ea35fc7e902080b9ada385aa4a039a9f4a9e024471b98d75d24050a0aad4.scope: Deactivated successfully.
Jan 31 01:50:35 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:35 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:35 np0005603541 python3[91061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:35 np0005603541 podman[91062]: 2026-01-31 06:50:35.612721024 +0000 UTC m=+0.036589395 container create ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:50:35 np0005603541 systemd[1]: Started libpod-conmon-ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314.scope.
Jan 31 01:50:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8000f1b50489ac26ed2575ee95b758f8272de0e801eda2a1f865efcac33cef8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8000f1b50489ac26ed2575ee95b758f8272de0e801eda2a1f865efcac33cef8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:35 np0005603541 podman[91062]: 2026-01-31 06:50:35.662218668 +0000 UTC m=+0.086087079 container init ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 01:50:35 np0005603541 podman[91062]: 2026-01-31 06:50:35.666139615 +0000 UTC m=+0.090007986 container start ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:35 np0005603541 podman[91062]: 2026-01-31 06:50:35.669328104 +0000 UTC m=+0.093196595 container attach ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:35 np0005603541 podman[91062]: 2026-01-31 06:50:35.595260523 +0000 UTC m=+0.019128924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 01:50:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 01:50:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 31 01:50:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025839986' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 01:50:36 np0005603541 flamboyant_villani[91078]: mimic
Jan 31 01:50:36 np0005603541 systemd[1]: libpod-ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314.scope: Deactivated successfully.
Jan 31 01:50:36 np0005603541 podman[91062]: 2026-01-31 06:50:36.22308315 +0000 UTC m=+0.646951521 container died ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:50:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:36 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8000f1b50489ac26ed2575ee95b758f8272de0e801eda2a1f865efcac33cef8a-merged.mount: Deactivated successfully.
Jan 31 01:50:36 np0005603541 podman[91062]: 2026-01-31 06:50:36.371810067 +0000 UTC m=+0.795678438 container remove ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314 (image=quay.io/ceph/ceph:v18, name=flamboyant_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:50:36 np0005603541 systemd[1]: libpod-conmon-ab3b60cfebfa1904e065f6f2b3363a2d52b0990cbf09981880a9c73e3a953314.scope: Deactivated successfully.
Jan 31 01:50:36 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:36 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:37 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 01:50:37 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 01:50:37 np0005603541 python3[91141]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.26810347 +0000 UTC m=+0.031821567 container create 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:50:37 np0005603541 systemd[1]: Started libpod-conmon-8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80.scope.
Jan 31 01:50:37 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eb349dad9a813bf86d0c3740260ace1fb6840cb27f615259972f9363ee4cf3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eb349dad9a813bf86d0c3740260ace1fb6840cb27f615259972f9363ee4cf3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.318778613 +0000 UTC m=+0.082496740 container init 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.32313739 +0000 UTC m=+0.086855497 container start 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.326482133 +0000 UTC m=+0.090200270 container attach 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.254800981 +0000 UTC m=+0.018519108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:50:37 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:37 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 31 01:50:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596617942' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 01:50:37 np0005603541 vibrant_boyd[91157]: 
Jan 31 01:50:37 np0005603541 systemd[1]: libpod-8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80.scope: Deactivated successfully.
Jan 31 01:50:37 np0005603541 vibrant_boyd[91157]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":8}}
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.929330674 +0000 UTC m=+0.693048781 container died 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:37 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b8eb349dad9a813bf86d0c3740260ace1fb6840cb27f615259972f9363ee4cf3-merged.mount: Deactivated successfully.
Jan 31 01:50:37 np0005603541 podman[91142]: 2026-01-31 06:50:37.964116274 +0000 UTC m=+0.727834381 container remove 8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80 (image=quay.io/ceph/ceph:v18, name=vibrant_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 01:50:37 np0005603541 systemd[1]: libpod-conmon-8a6e954d36b1653e986296efdfcb2f8ce58403b029e63c908e7d094f1dda9f80.scope: Deactivated successfully.
Jan 31 01:50:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:38 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:38 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:39 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:39 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 01:50:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 01:50:40 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:40 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:41 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 01:50:41 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 01:50:41 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:41 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:41 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:42 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:42 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:43 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:43 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v119: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:44 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:44 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:45 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:45 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v120: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:46 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:46 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:47 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:47 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:50:48
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 10/10 changes
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] Executing plan auto_2026-01-31_06:50:48
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.0 mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.a mappings [{'from': 1, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.b mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.13 mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.14 mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 6.1f mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 7.6 mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 7.c mappings [{'from': 1, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 7.10 mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [balancer INFO root] ceph osd pg-upmap-items 7.1e mappings [{'from': 0, 'to': 2}]
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v121: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 01:50:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]': finished
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 crush map has features 3314933000854323200, adjusting msgr requires
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:49 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 01:50:49 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:49 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]': finished
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v123: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:50 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 01:50:50 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 3fd12547-f3c6-4faa-87ae-86d5caa630c5 does not exist
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev a8d97d80-0ea6-4529-8aee-fb92afb23ce1 does not exist
Jan 31 01:50:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev cdc4f46b-c663-4c4c-bd58-5543d60457d7 does not exist
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.228024674 +0000 UTC m=+0.038361019 container create 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:50:51 np0005603541 systemd[1]: Started libpod-conmon-238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9.scope.
Jan 31 01:50:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.278402879 +0000 UTC m=+0.088739234 container init 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.286268714 +0000 UTC m=+0.096605069 container start 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:50:51 np0005603541 hopeful_hypatia[92195]: 167 167
Jan 31 01:50:51 np0005603541 systemd[1]: libpod-238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9.scope: Deactivated successfully.
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.290815316 +0000 UTC m=+0.101151671 container attach 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.292458407 +0000 UTC m=+0.102794742 container died 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.212122211 +0000 UTC m=+0.022458566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c5c4920621f8b88a8efab73b03135f7c2f8bcb4c734b28c40d8a0011af6153e0-merged.mount: Deactivated successfully.
Jan 31 01:50:51 np0005603541 podman[92179]: 2026-01-31 06:50:51.326783145 +0000 UTC m=+0.137119480 container remove 238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 01:50:51 np0005603541 systemd[1]: libpod-conmon-238050df65f7412fdfa5f3120bc272a64b545f67e4fe8db510820911670562e9.scope: Deactivated successfully.
Jan 31 01:50:51 np0005603541 podman[92219]: 2026-01-31 06:50:51.456763947 +0000 UTC m=+0.052024006 container create 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:50:51 np0005603541 systemd[1]: Started libpod-conmon-3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa.scope.
Jan 31 01:50:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:51 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:51 np0005603541 podman[92219]: 2026-01-31 06:50:51.52645001 +0000 UTC m=+0.121710159 container init 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:50:51 np0005603541 podman[92219]: 2026-01-31 06:50:51.434747244 +0000 UTC m=+0.030007393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:51 np0005603541 podman[92219]: 2026-01-31 06:50:51.532200522 +0000 UTC m=+0.127460591 container start 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:51 np0005603541 podman[92219]: 2026-01-31 06:50:51.535718169 +0000 UTC m=+0.130978268 container attach 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:51 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:51 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 39 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 39 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 39 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.462121964s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 116.534255981s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.462121964s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.534255981s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.961147308s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 116.033470154s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=31/32 n=0 ec=19/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.580303192s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 116.652687073s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.961147308s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033470154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=31/32 n=0 ec=19/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.580303192s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652687073s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.1e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.960892677s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 116.033401489s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.1e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.960892677s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033401489s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579844475s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 116.652725220s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579844475s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652725220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.10( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.960870743s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 116.033882141s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579741478s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 116.652816772s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[7.10( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=11.960870743s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033882141s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579721451s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 116.652809143s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579741478s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652816772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:51 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=12.579721451s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652809143s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:52 np0005603541 naughty_albattani[92236]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:50:52 np0005603541 naughty_albattani[92236]: --> relative data size: 1.0
Jan 31 01:50:52 np0005603541 naughty_albattani[92236]: --> All data devices are unavailable
Jan 31 01:50:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v124: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 01:50:52 np0005603541 systemd[1]: libpod-3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa.scope: Deactivated successfully.
Jan 31 01:50:52 np0005603541 podman[92219]: 2026-01-31 06:50:52.277545955 +0000 UTC m=+0.872806044 container died 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 01:50:52 np0005603541 systemd[1]: var-lib-containers-storage-overlay-52a62497151e32f17ae4f2a5c252506eaf0058fde578b2795725fbadea436586-merged.mount: Deactivated successfully.
Jan 31 01:50:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 01:50:52 np0005603541 podman[92219]: 2026-01-31 06:50:52.334551104 +0000 UTC m=+0.929811163 container remove 3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 01:50:52 np0005603541 systemd[1]: libpod-conmon-3a0ac32184bed320c85ab7378701f32735dbcd0ae589d358bf96df6c72f820fa.scope: Deactivated successfully.
Jan 31 01:50:52 np0005603541 ceph-mgr[74648]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1739985396; not ready for session (expect reconnect)
Jan 31 01:50:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:52 np0005603541 ceph-mgr[74648]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.814969999 +0000 UTC m=+0.040231936 container create cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:52 np0005603541 systemd[1]: Started libpod-conmon-cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca.scope.
Jan 31 01:50:52 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.873877215 +0000 UTC m=+0.099139162 container init cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.883500892 +0000 UTC m=+0.108762819 container start cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 01:50:52 np0005603541 goofy_ellis[92423]: 167 167
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.887716767 +0000 UTC m=+0.112978714 container attach cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:52 np0005603541 systemd[1]: libpod-cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca.scope: Deactivated successfully.
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.888738511 +0000 UTC m=+0.114000438 container died cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.798633975 +0000 UTC m=+0.023895902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:52 np0005603541 systemd[1]: var-lib-containers-storage-overlay-207e0b9bec245aa64ce8750739e6e0f44cae9e25da58679f85f312c88efa1fdc-merged.mount: Deactivated successfully.
Jan 31 01:50:52 np0005603541 podman[92407]: 2026-01-31 06:50:52.975269721 +0000 UTC m=+0.200531638 container remove cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 01:50:52 np0005603541 systemd[1]: libpod-conmon-cc0c7205c7968583feae019ed8a990cb70f248b3e996f69654b81b5f0f9ecaca.scope: Deactivated successfully.
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.106502814 +0000 UTC m=+0.045100085 container create 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:50:53 np0005603541 systemd[1]: Started libpod-conmon-14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c.scope.
Jan 31 01:50:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa686ad1908b8d1747762b24455c67394bd5cf0c17d6805b23af891829faa458/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa686ad1908b8d1747762b24455c67394bd5cf0c17d6805b23af891829faa458/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa686ad1908b8d1747762b24455c67394bd5cf0c17d6805b23af891829faa458/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa686ad1908b8d1747762b24455c67394bd5cf0c17d6805b23af891829faa458/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.172758402 +0000 UTC m=+0.111355693 container init 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.178810751 +0000 UTC m=+0.117408022 container start 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.183745913 +0000 UTC m=+0.122343184 container attach 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.089831082 +0000 UTC m=+0.028428373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: OSD bench result of 4413.448469 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396] boot
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 01:50:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]: {
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:    "0": [
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:        {
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "devices": [
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "/dev/loop3"
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            ],
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "lv_name": "ceph_lv0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "lv_size": "7511998464",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "name": "ceph_lv0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "tags": {
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.cluster_name": "ceph",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.crush_device_class": "",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.encrypted": "0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.osd_id": "0",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.type": "block",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:                "ceph.vdo": "0"
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            },
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "type": "block",
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:            "vg_name": "ceph_vg0"
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:        }
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]:    ]
Jan 31 01:50:53 np0005603541 jovial_clarke[92463]: }
Jan 31 01:50:53 np0005603541 systemd[1]: libpod-14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c.scope: Deactivated successfully.
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.913273365 +0000 UTC m=+0.851870636 container died 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fa686ad1908b8d1747762b24455c67394bd5cf0c17d6805b23af891829faa458-merged.mount: Deactivated successfully.
Jan 31 01:50:53 np0005603541 podman[92446]: 2026-01-31 06:50:53.957067237 +0000 UTC m=+0.895664508 container remove 14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_clarke, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:53 np0005603541 systemd[1]: libpod-conmon-14266badc5ca0199869d08efab764493da6a8718844edf01a397619ef218127c.scope: Deactivated successfully.
Jan 31 01:50:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v126: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 01:50:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 01:50:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 01:50:54 np0005603541 ceph-mon[74355]: osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396] boot
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.1b( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.1e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.260021210s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033401489s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.760852814s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.534255981s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.1e( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259985924s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033401489s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.760722160s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.534255981s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259654045s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033470154s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259615898s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033470154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=31/32 n=0 ec=19/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.878691673s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652687073s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=31/32 n=0 ec=19/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.878662109s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652687073s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.0( empty local-lis/les=29/30 n=0 ec=17/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.0( empty local-lis/les=29/30 n=0 ec=17/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.a( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=40) [2] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.a( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.d( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.d( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.c( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.a( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.b( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.8( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.10( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.878116608s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652809143s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.10( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.a( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.878031731s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652809143s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.14( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.877915382s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652725220s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.14( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.877871513s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652725220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.10( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.258929253s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033882141s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[2.15( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[2.15( empty local-lis/les=35/36 n=0 ec=26/13 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.10( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.258896828s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.033882141s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.12( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[5.13( empty local-lis/les=29/30 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=40) [2] r=-1 lpr=40 pi=[29,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.877672195s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652816772s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=40 pruub=9.877597809s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 116.652816772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 40 pg[7.1d( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:50:54 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 41 pg[7.1d( empty local-lis/les=35/36 n=0 ec=31/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.476309241 +0000 UTC m=+0.038367099 container create d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:50:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:50:54 np0005603541 systemd[1]: Started libpod-conmon-d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1.scope.
Jan 31 01:50:54 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.548069705 +0000 UTC m=+0.110127583 container init d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.554099905 +0000 UTC m=+0.116157753 container start d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.556710349 +0000 UTC m=+0.118768227 container attach d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.459508386 +0000 UTC m=+0.021566274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:54 np0005603541 strange_edison[92638]: 167 167
Jan 31 01:50:54 np0005603541 systemd[1]: libpod-d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1.scope: Deactivated successfully.
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.558270318 +0000 UTC m=+0.120328196 container died d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 01:50:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b9f67580eb42fe459fecc5cad046a8899be049a887a9f2a320fff37f87b6e1c3-merged.mount: Deactivated successfully.
Jan 31 01:50:54 np0005603541 podman[92622]: 2026-01-31 06:50:54.597841325 +0000 UTC m=+0.159899193 container remove d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:54 np0005603541 systemd[1]: libpod-conmon-d4abe288fe8a8b0290a1ab80f01896a289341b283ba5b6dde4bd5c41a797acb1.scope: Deactivated successfully.
Jan 31 01:50:54 np0005603541 podman[92661]: 2026-01-31 06:50:54.724453334 +0000 UTC m=+0.043273650 container create b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:50:54 np0005603541 systemd[1]: Started libpod-conmon-b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2.scope.
Jan 31 01:50:54 np0005603541 podman[92661]: 2026-01-31 06:50:54.701394585 +0000 UTC m=+0.020214931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:50:54 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:50:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a75230307beaf085772d9462be4fa89dd8f9a26d66816242d76a6f0e6587e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a75230307beaf085772d9462be4fa89dd8f9a26d66816242d76a6f0e6587e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a75230307beaf085772d9462be4fa89dd8f9a26d66816242d76a6f0e6587e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a75230307beaf085772d9462be4fa89dd8f9a26d66816242d76a6f0e6587e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:50:54 np0005603541 podman[92661]: 2026-01-31 06:50:54.814158832 +0000 UTC m=+0.132979168 container init b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:54 np0005603541 podman[92661]: 2026-01-31 06:50:54.819649377 +0000 UTC m=+0.138469693 container start b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:50:54 np0005603541 podman[92661]: 2026-01-31 06:50:54.823154485 +0000 UTC m=+0.141974871 container attach b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]: {
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:        "osd_id": 0,
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:        "type": "bluestore"
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]:    }
Jan 31 01:50:55 np0005603541 goofy_margulis[92677]: }
Jan 31 01:50:55 np0005603541 systemd[1]: libpod-b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2.scope: Deactivated successfully.
Jan 31 01:50:55 np0005603541 podman[92661]: 2026-01-31 06:50:55.561053533 +0000 UTC m=+0.879873859 container died b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:50:55 np0005603541 systemd[1]: var-lib-containers-storage-overlay-85a75230307beaf085772d9462be4fa89dd8f9a26d66816242d76a6f0e6587e1-merged.mount: Deactivated successfully.
Jan 31 01:50:55 np0005603541 podman[92661]: 2026-01-31 06:50:55.669147955 +0000 UTC m=+0.987968301 container remove b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:50:55 np0005603541 systemd[1]: libpod-conmon-b948a68d78a7be78f953e4fd8719edc1d554c5e624f9360388bc60e59ac5e1e2.scope: Deactivated successfully.
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:55 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev b89794a1-e321-4896-b3aa-37d42e83468a (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:55 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.fbgckm on compute-2
Jan 31 01:50:55 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.fbgckm on compute-2
Jan 31 01:50:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v128: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:56 np0005603541 ceph-mon[74355]: Deploying daemon rgw.rgw.compute-2.fbgckm on compute-2
Jan 31 01:50:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Jan 31 01:50:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Jan 31 01:50:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v129: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:50:58 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.izlkft on compute-1
Jan 31 01:50:58 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.izlkft on compute-1
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:50:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:50:59 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 01:50:59 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 01:50:59 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 42 pg[8.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [0] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: Deploying daemon rgw.rgw.compute-1.izlkft on compute-1
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 01:50:59 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:51:00 np0005603541 ceph-mgr[74648]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v131: 194 pgs: 1 unknown, 1 active+clean+laggy, 192 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:00 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ibblfd on compute-0
Jan 31 01:51:00 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ibblfd on compute-0
Jan 31 01:51:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Jan 31 01:51:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 01:51:00 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 43 pg[8.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [0] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.862217862 +0000 UTC m=+0.034489264 container create 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:51:00 np0005603541 systemd[1]: Started libpod-conmon-4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2.scope.
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: Deploying daemon rgw.rgw.compute-0.ibblfd on compute-0
Jan 31 01:51:00 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 01:51:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.933191937 +0000 UTC m=+0.105463389 container init 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.93985745 +0000 UTC m=+0.112128872 container start 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.84471858 +0000 UTC m=+0.016990032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:00 np0005603541 quirky_lalande[92874]: 167 167
Jan 31 01:51:00 np0005603541 systemd[1]: libpod-4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2.scope: Deactivated successfully.
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.943766117 +0000 UTC m=+0.116037529 container attach 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.944496696 +0000 UTC m=+0.116768108 container died 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e07693fb815099ba99cbeb199f79f5900206400aafa1fb89983916a381e6442d-merged.mount: Deactivated successfully.
Jan 31 01:51:00 np0005603541 podman[92853]: 2026-01-31 06:51:00.982148037 +0000 UTC m=+0.154419449 container remove 4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:00 np0005603541 systemd[1]: libpod-conmon-4db49b5e80bf3391ee986fadaa88c973e81d430da1aeae8808e7d5ee7bafbef2.scope: Deactivated successfully.
Jan 31 01:51:01 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:01 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:01 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:51:01 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:01 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:01 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:01 np0005603541 systemd[1]: Starting Ceph rgw.rgw.compute-0.ibblfd for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:51:01 np0005603541 podman[93017]: 2026-01-31 06:51:01.707050924 +0000 UTC m=+0.036256827 container create b8d4bbac272ff3119ba9b83679c433387ce9faabe3b0b3d3247455c88eba7de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-0-ibblfd, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 01:51:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01a96be9220fda11a005083047806198953c9aff3fd73fe86eaccaf53514195/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01a96be9220fda11a005083047806198953c9aff3fd73fe86eaccaf53514195/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01a96be9220fda11a005083047806198953c9aff3fd73fe86eaccaf53514195/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:01 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01a96be9220fda11a005083047806198953c9aff3fd73fe86eaccaf53514195/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ibblfd supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 01:51:01 np0005603541 podman[93017]: 2026-01-31 06:51:01.759522232 +0000 UTC m=+0.088728145 container init b8d4bbac272ff3119ba9b83679c433387ce9faabe3b0b3d3247455c88eba7de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-0-ibblfd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 01:51:01 np0005603541 podman[93017]: 2026-01-31 06:51:01.769100449 +0000 UTC m=+0.098306382 container start b8d4bbac272ff3119ba9b83679c433387ce9faabe3b0b3d3247455c88eba7de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-0-ibblfd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 01:51:01 np0005603541 bash[93017]: b8d4bbac272ff3119ba9b83679c433387ce9faabe3b0b3d3247455c88eba7de9
Jan 31 01:51:01 np0005603541 podman[93017]: 2026-01-31 06:51:01.689230074 +0000 UTC m=+0.018436017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 systemd[1]: Started Ceph rgw.rgw.compute-0.ibblfd for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:51:01 np0005603541 radosgw[93037]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:01 np0005603541 radosgw[93037]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 01:51:01 np0005603541 radosgw[93037]: framework: beast
Jan 31 01:51:01 np0005603541 radosgw[93037]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 01:51:01 np0005603541 radosgw[93037]: init_numa not setting numa affinity
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:01 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 44 pg[9.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev b89794a1-e321-4896-b3aa-37d42e83468a (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event b89794a1-e321-4896-b3aa-37d42e83468a (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 633e579c-bf8a-4a13-bdb8-d1e6e1253155 (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.wcykmw on compute-2
Jan 31 01:51:01 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.wcykmw on compute-2
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v134: 195 pgs: 2 unknown, 1 active+clean+laggy, 192 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 01:51:02 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 45 pg[9.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: Deploying daemon mds.cephfs.compute-2.wcykmw on compute-2
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 01:51:02 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.kanoes on compute-0
Jan 31 01:51:03 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.kanoes on compute-0
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e3 new map
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:50:15.676874+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.wcykmw{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:boot
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] as mds.0
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wcykmw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 01:51:03 np0005603541 podman[93246]: 2026-01-31 06:51:03.941606496 +0000 UTC m=+0.046617563 container create b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.wcykmw"} v 0) v1
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wcykmw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e4 new map
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:51:03.941771+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.wcykmw{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 01:51:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:creating}
Jan 31 01:51:03 np0005603541 systemd[1]: Started libpod-conmon-b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e.scope.
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wcykmw is now active in filesystem cephfs as rank 0
Jan 31 01:51:04 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:03.919393657 +0000 UTC m=+0.024404774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:04.029055618 +0000 UTC m=+0.134066715 container init b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:04.037286591 +0000 UTC m=+0.142297668 container start b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:04.042705225 +0000 UTC m=+0.147716332 container attach b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:51:04 np0005603541 systemd[1]: libpod-b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e.scope: Deactivated successfully.
Jan 31 01:51:04 np0005603541 compassionate_golick[93262]: 167 167
Jan 31 01:51:04 np0005603541 conmon[93262]: conmon b8f1492c976c5eb25fc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e.scope/container/memory.events
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:04.045639248 +0000 UTC m=+0.150650335 container died b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-bf45f7e54441c69e3ea7b3ac8a5476d6c5d7a02df4abaf5b1353e781b6340897-merged.mount: Deactivated successfully.
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:04 np0005603541 podman[93246]: 2026-01-31 06:51:04.084250941 +0000 UTC m=+0.189262018 container remove b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:51:04 np0005603541 systemd[1]: libpod-conmon-b8f1492c976c5eb25fc934ce89ec488e61261f4a2dd6e73ccebafadfad6d050e.scope: Deactivated successfully.
Jan 31 01:51:04 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:04 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v137: 196 pgs: 1 unknown, 1 active+clean+laggy, 194 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 01:51:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Jan 31 01:51:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Jan 31 01:51:04 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:04 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:04 np0005603541 systemd[1]: Starting Ceph mds.cephfs.compute-0.kanoes for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:51:04 np0005603541 podman[93406]: 2026-01-31 06:51:04.78735434 +0000 UTC m=+0.039037705 container create 0f922a52f6bf877a65c3a12572f0bbe37322248fb41ac91ce3c47a857249d29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-0-kanoes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 01:51:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfd98f99a0ec33ddaa0b8bd023704929066fa8d373a43986b42dd93557cb500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfd98f99a0ec33ddaa0b8bd023704929066fa8d373a43986b42dd93557cb500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfd98f99a0ec33ddaa0b8bd023704929066fa8d373a43986b42dd93557cb500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfd98f99a0ec33ddaa0b8bd023704929066fa8d373a43986b42dd93557cb500/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.kanoes supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:04 np0005603541 podman[93406]: 2026-01-31 06:51:04.860150259 +0000 UTC m=+0.111833654 container init 0f922a52f6bf877a65c3a12572f0bbe37322248fb41ac91ce3c47a857249d29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-0-kanoes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 31 01:51:04 np0005603541 podman[93406]: 2026-01-31 06:51:04.770793971 +0000 UTC m=+0.022477356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:04 np0005603541 podman[93406]: 2026-01-31 06:51:04.871847199 +0000 UTC m=+0.123530564 container start 0f922a52f6bf877a65c3a12572f0bbe37322248fb41ac91ce3c47a857249d29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-0-kanoes, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 01:51:04 np0005603541 bash[93406]: 0f922a52f6bf877a65c3a12572f0bbe37322248fb41ac91ce3c47a857249d29f
Jan 31 01:51:04 np0005603541 systemd[1]: Started Ceph mds.cephfs.compute-0.kanoes for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:51:04 np0005603541 ceph-mds[93426]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:51:04 np0005603541 ceph-mds[93426]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 01:51:04 np0005603541 ceph-mds[93426]: main not setting numa affinity
Jan 31 01:51:04 np0005603541 ceph-mds[93426]: pidfile_write: ignore empty --pid-file
Jan 31 01:51:04 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-0-kanoes[93422]: starting mds.cephfs.compute-0.kanoes at 
Jan 31 01:51:04 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Updating MDS map to version 4 from mon.0
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: Deploying daemon mds.cephfs.compute-0.kanoes on compute-0
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: daemon mds.cephfs.compute-2.wcykmw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: daemon mds.cephfs.compute-2.wcykmw is now active in filesystem cephfs as rank 0
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 01:51:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e5 new map
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:51:04.968424+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:05 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Updating MDS map to version 5 from mon.0
Jan 31 01:51:05 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Monitors have assigned me to become a standby.
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:active
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] up:boot
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 1 up:standby
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.kanoes"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kanoes"}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e5 all = 0
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e6 new map
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:51:04.968424+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 1 up:standby
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.hhzmle on compute-1
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.hhzmle on compute-1
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 13 completed events
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:51:05 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:51:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: Deploying daemon mds.cephfs.compute-1.hhzmle on compute-1
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/2282082505' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 2 unknown, 1 active+clean+laggy, 194 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 01:51:06 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 48 pg[11.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 01:51:06 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev 1e0c03b5-b76e-4f21-ab8e-1492206db863 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 49 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 633e579c-bf8a-4a13-bdb8-d1e6e1253155 (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 633e579c-bf8a-4a13-bdb8-d1e6e1253155 (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.102:0/2282082505' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev be652d0a-bfeb-4d0b-ab13-6a49d03ee080 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.dsjekd on compute-0
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.dsjekd on compute-0
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 01:51:07 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev f6d5eb40-0417-4a57-b60e-0c8cbefa569d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:51:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e7 new map
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e7 print_map
e7
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-31T06:50:15.676838+0000
modified	2026-01-31T06:51:04.968424+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] up:boot
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.hhzmle"} v 0) v1
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hhzmle"}]: dispatch
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e7 all = 0
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: Deploying daemon haproxy.rgw.default.compute-0.dsjekd on compute-0
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v143: 197 pgs: 1 unknown, 1 active+clean+laggy, 195 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 511 B/s wr, 2 op/s
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:08 np0005603541 radosgw[93037]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 01:51:08 np0005603541 radosgw[93037]: framework: beast
Jan 31 01:51:08 np0005603541 radosgw[93037]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 01:51:08 np0005603541 radosgw[93037]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 01:51:08 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-0-ibblfd[93033]: 2026-01-31T06:51:08.739+0000 7f398dc60940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 01:51:08 np0005603541 radosgw[93037]: starting handler: beast
Jan 31 01:51:08 np0005603541 radosgw[93037]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: mgrc service_daemon_register rgw.14367 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ibblfd,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=5b2cd03f-b7da-4851-9803-ae95ec73332f,zone_name=default,zonegroup_id=496e4318-7ebf-46fe-91cf-296e443f34ee,zonegroup_name=default}
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 01:51:08 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev b4352dc4-f11e-48c6-b264-a9b1c6c41994 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 01:51:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e8 new map
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e8 print_map
e8
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	8
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-31T06:50:15.676838+0000
modified	2026-01-31T06:51:09.005628+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:09 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Updating MDS map to version 8 from mon.0
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:active
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] up:standby
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 51 pg[8.0( v 43'8 (0'0,43'8] local-lis/les=42/43 n=6 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51 pruub=15.729331970s) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 lcod 43'7 mlcod 43'7 active pruub 137.092285156s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 51 pg[9.0( v 50'431 (0'0,50'431] local-lis/les=44/45 n=77 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=51 pruub=9.753387451s) [0] r=0 lpr=51 pi=[44,51)/1 luod=50'427 crt=50'431 lcod 50'426 mlcod 50'426 active pruub 131.117385864s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 51 pg[8.0( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51 pruub=15.729331970s) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 lcod 43'7 mlcod 0'0 unknown pruub 137.092285156s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 51 pg[9.0( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=51 pruub=9.753387451s) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 50'426 mlcod 0'0 unknown pruub 131.117385864s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 01:51:09 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.905281571 +0000 UTC m=+2.291435329 container create f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:09 np0005603541 systemd[1]: Started libpod-conmon-f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032.scope.
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.892024533 +0000 UTC m=+2.278178281 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 01:51:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.980417428 +0000 UTC m=+2.366571156 container init f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.986379366 +0000 UTC m=+2.372533084 container start f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:09 np0005603541 festive_lovelace[94262]: 0 0
Jan 31 01:51:09 np0005603541 systemd[1]: libpod-f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032.scope: Deactivated successfully.
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.990098107 +0000 UTC m=+2.376251825 container attach f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:09 np0005603541 podman[93601]: 2026-01-31 06:51:09.990661281 +0000 UTC m=+2.376815009 container died f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 01:51:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] update: starting ev dac6f08a-de99-4377-9d30-adbbeacec7e0 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev 1e0c03b5-b76e-4f21-ab8e-1492206db863 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 1e0c03b5-b76e-4f21-ab8e-1492206db863 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev f6d5eb40-0417-4a57-b60e-0c8cbefa569d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event f6d5eb40-0417-4a57-b60e-0c8cbefa569d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev b4352dc4-f11e-48c6-b264-a9b1c6c41994 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event b4352dc4-f11e-48c6-b264-a9b1c6c41994 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev dac6f08a-de99-4377-9d30-adbbeacec7e0 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 01:51:09 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event dac6f08a-de99-4377-9d30-adbbeacec7e0 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.11( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.10( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.5( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.4( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.14( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.14( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.15( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.15( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.17( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.16( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.16( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.17( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.11( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.10( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.3( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.2( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.3( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.2( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.f( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.e( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.9( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.8( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.8( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.9( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.b( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.f( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.e( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.a( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.c( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.d( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.d( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.b( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.a( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.c( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1( v 43'8 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.6( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.7( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.7( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.6( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.5( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.4( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1a( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1b( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1b( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1a( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.18( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.19( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.18( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.19( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1f( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1e( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1f( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1e( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1c( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1d( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1c( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1d( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.12( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.13( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.12( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=42/43 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.13( v 50'431 lc 0'0 (0'0,50'431] local-lis/les=44/45 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.10( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.5( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 systemd[1]: var-lib-containers-storage-overlay-29160eb15ac7ecc069868583c2731a61c5c5e46c1abb996caab8a1924cb3fe4f-merged.mount: Deactivated successfully.
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.14( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.14( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.17( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.11( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.15( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.16( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.17( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.11( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.16( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.10( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.3( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.3( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.2( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.e( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.9( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.2( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.15( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.8( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.4( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.9( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.b( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.f( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.c( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.e( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.a( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.d( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.0( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 50'426 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.0( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 43'7 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.d( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.7( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.7( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.6( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.5( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.6( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.4( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1( v 50'431 (0'0,50'431] local-lis/les=51/52 n=3 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1b( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1a( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.18( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.a( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1a( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.18( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.19( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1e( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1f( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1c( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1d( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.12( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.13( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.1e( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.1d( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.12( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[9.13( v 50'431 (0'0,50'431] local-lis/les=51/52 n=2 ec=51/44 lis/c=44/44 les/c/f=45/45/0 sis=51) [0] r=0 lpr=51 pi=[44,51)/1 crt=50'431 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 52 pg[8.19( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=42/42 les/c/f=43/43/0 sis=51) [0] r=0 lpr=51 pi=[42,51)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:10 np0005603541 podman[93601]: 2026-01-31 06:51:10.035580062 +0000 UTC m=+2.421733780 container remove f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032 (image=quay.io/ceph/haproxy:2.3, name=festive_lovelace)
Jan 31 01:51:10 np0005603541 systemd[1]: libpod-conmon-f83e5e249c87e2cd958ef9b115cfc6ca2f0e5235a09a96e2194af447e6356032.scope: Deactivated successfully.
Jan 31 01:51:10 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: Cluster is now healthy
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 01:51:10 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:10 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v146: 259 pgs: 1 active+clean+laggy, 62 unknown, 196 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 8.0 KiB/s wr, 31 op/s
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:10 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 18 completed events
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:51:10 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:10 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:10 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:10 np0005603541 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.dsjekd for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:51:10 np0005603541 podman[94407]: 2026-01-31 06:51:10.820853351 +0000 UTC m=+0.045970367 container create eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:10 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5e21367d287cd62ddd766bfd5cc5702ff41cf190b0344b8183177a7a4668e3/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:10 np0005603541 podman[94407]: 2026-01-31 06:51:10.793752191 +0000 UTC m=+0.018869237 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 01:51:10 np0005603541 podman[94407]: 2026-01-31 06:51:10.899530936 +0000 UTC m=+0.124647972 container init eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:10 np0005603541 podman[94407]: 2026-01-31 06:51:10.90416697 +0000 UTC m=+0.129283986 container start eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:10 np0005603541 bash[94407]: eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358
Jan 31 01:51:10 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd[94422]: [NOTICE] 030/065110 (2) : New worker #1 (4) forked
Jan 31 01:51:10 np0005603541 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.dsjekd for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:51:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 01:51:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.wrxlmw on compute-2
Jan 31 01:51:11 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.wrxlmw on compute-2
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:11 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 53 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=11.679975510s) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 135.183013916s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:11 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 53 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=11.679975510s) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown pruub 135.183013916s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e9 new map
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T06:50:15.676838+0000#012modified#0112026-01-31T06:51:09.005628+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] up:standby
Jan 31 01:51:11 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.12( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.16( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.15( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.13( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.c( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.b( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.9( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.d( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.a( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.8( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.2( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.3( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.5( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.7( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.18( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1a( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1b( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1c( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1e( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1d( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1f( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.11( empty local-lis/les=48/49 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: Deploying daemon haproxy.rgw.default.compute-2.wrxlmw on compute-2
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.16( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.13( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.c( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.5( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.7( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1d( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 active+clean+laggy, 124 unknown, 196 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 01:51:12 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 01:51:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 250 KiB/s rd, 3.0 KiB/s wr, 447 op/s
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 01:51:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531730652s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579269409s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.11( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.293203354s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340805054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531675339s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579269409s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.11( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.293141365s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340805054s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.14( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292660713s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340576172s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.5( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.284431458s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.332382202s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531228065s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579147339s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.14( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292615891s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340576172s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.5( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.284325600s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.332382202s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.16( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531235695s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579269409s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531126976s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579147339s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.16( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292520523s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340820312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.16( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292479515s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340820312s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.15( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292388916s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340805054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.15( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292346954s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340805054s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.16( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.531086922s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579269409s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530467033s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579162598s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.13( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530912399s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579650879s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530420303s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579162598s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.10( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292012215s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340896606s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530699730s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579635620s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.2( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.292005539s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340988159s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.10( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291938782s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340896606s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.2( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291967392s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340988159s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530671120s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579635620s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.13( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530882835s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579650879s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.17( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291793823s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340866089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.17( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291617393s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340866089s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.3( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291565895s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340911865s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.3( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291543961s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340911865s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291363716s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.340972900s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291410446s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341033936s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530152321s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579818726s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.530130386s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579818726s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291301727s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.340972900s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291224480s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341033936s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.a( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291827202s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341705322s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.9( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291152954s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341110229s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.a( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291795731s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341705322s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529727936s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579864502s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.d( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291107178s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341186523s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529687881s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579864502s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291215897s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341400146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.9( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290890694s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341110229s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291176796s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341400146s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529463768s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579849243s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529304504s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579772949s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290735245s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341201782s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529276848s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579772949s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290707588s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341201782s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.d( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.291013718s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341186523s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529219627s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579864502s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529433250s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579849243s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529092789s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579864502s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529271126s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.580017090s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.529093742s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.580017090s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.6( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290488243s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341491699s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.6( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290454865s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341491699s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.7( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528867722s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579940796s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.297818184s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.349014282s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.5( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528670311s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579879761s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.5( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528643608s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579879761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1b( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.297789574s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.349014282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.7( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528762817s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579940796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.4( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290228844s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.341583252s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.4( v 43'8 (0'0,43'8] local-lis/les=51/52 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.290202141s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.341583252s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528356552s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579910278s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528327942s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579910278s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528240204s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579895020s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.19( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.297424316s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.349212646s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528091431s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579925537s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.19( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.297387123s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.349212646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528068542s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579925537s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.18( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296279907s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.348251343s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.18( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296238899s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.348251343s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.528207779s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579895020s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1d( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527861595s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579986572s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1d( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527842522s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296020508s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.348281860s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527627945s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579940796s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527607918s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579940796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1f( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.295952797s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.348281860s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527514458s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active pruub 140.579986572s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.527491570s) [1] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.579986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296486855s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.349105835s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.1c( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296456337s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.349105835s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.12( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296424866s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active pruub 138.349197388s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:14 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 55 pg[8.12( v 43'8 (0'0,43'8] local-lis/les=51/52 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=11.296380043s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.349197388s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.kqakbv on compute-0
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.kqakbv on compute-0
Jan 31 01:51:15 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 5b449515-046c-46f9-9fd9-2f9d9804eb52 (Global Recovery Event) in 15 seconds
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: Deploying daemon keepalived.rgw.default.compute-0.kqakbv on compute-0
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 01:51:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.13( v 47'48 (0'0,47'48] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.14( v 54'51 lc 47'43 (0'0,54'51] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=54'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.15( v 54'51 lc 47'19 (0'0,54'51] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=54'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.8( v 47'48 (0'0,47'48] local-lis/les=55/56 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.2( v 47'48 (0'0,47'48] local-lis/les=55/56 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.18( v 47'48 (0'0,47'48] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.5( v 47'48 (0'0,47'48] local-lis/les=55/56 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.1b( v 47'48 (0'0,47'48] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:15 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 56 pg[10.19( v 47'48 (0'0,47'48] local-lis/les=55/56 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 3.1 KiB/s wr, 459 op/s
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 01:51:16 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 01:51:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:17 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 220 KiB/s rd, 2.7 KiB/s wr, 393 op/s; 145 B/s, 0 objects/s recovering
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.493067345 +0000 UTC m=+2.713223354 container create 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, name=keepalived, release=1793, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.47868031 +0000 UTC m=+2.698836339 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 01:51:18 np0005603541 systemd[1]: Started libpod-conmon-2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab.scope.
Jan 31 01:51:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.564492481 +0000 UTC m=+2.784648510 container init 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.570826788 +0000 UTC m=+2.790982807 container start 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.574499298 +0000 UTC m=+2.794655337 container attach 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 01:51:18 np0005603541 zealous_mendel[94671]: 0 0
Jan 31 01:51:18 np0005603541 systemd[1]: libpod-2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab.scope: Deactivated successfully.
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.57538832 +0000 UTC m=+2.795544329 container died 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, architecture=x86_64, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 01:51:18 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fe2d0dc057d58d1b70f5596b14b03f50459c9239b3a869e073547514d25cab2f-merged.mount: Deactivated successfully.
Jan 31 01:51:18 np0005603541 podman[94576]: 2026-01-31 06:51:18.616316122 +0000 UTC m=+2.836472131 container remove 2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab (image=quay.io/ceph/keepalived:2.2.4, name=zealous_mendel, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=)
Jan 31 01:51:18 np0005603541 systemd[1]: libpod-conmon-2e8fa1e05856568b575ee29521567d7ad6f060fc8c8f0d3cf815163cf90f98ab.scope: Deactivated successfully.
Jan 31 01:51:18 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 01:51:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 01:51:18 np0005603541 systemd[1]: Reloading.
Jan 31 01:51:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:51:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:51:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:19 np0005603541 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.kqakbv for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 01:51:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:19 np0005603541 podman[94815]: 2026-01-31 06:51:19.300693018 +0000 UTC m=+0.023245386 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 01:51:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s; 480 B/s, 1 objects/s recovering
Jan 31 01:51:20 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 19 completed events
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 01:51:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 31 01:51:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 01:51:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460589409s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'442 lcod 54'441 mlcod 54'441 active pruub 146.340881348s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460534096s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 146.340881348s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460606575s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 146.340972900s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460565567s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 146.340972900s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460639954s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'446 lcod 54'445 mlcod 54'445 active pruub 146.341201782s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460634232s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 146.341247559s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460589409s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 146.341201782s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460582733s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 146.341247559s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460714340s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=53'438 lcod 53'437 mlcod 53'437 active pruub 146.341751099s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460671425s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 146.341751099s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460197449s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=53'443 lcod 53'442 mlcod 53'442 active pruub 146.341522217s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.467973709s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 146.349349976s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.460135460s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 146.341522217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.467934608s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 146.349349976s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.466928482s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 146.348464966s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 58 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.466870308s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 146.348464966s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 01:51:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 01:51:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:20.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:21 np0005603541 podman[94815]: 2026-01-31 06:51:21.147489655 +0000 UTC m=+1.870042013 container create a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, distribution-scope=public, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 01:51:21 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90f5a6fb190be43c63d827d856834d1de7fe3b627914ef81ecd3688e4a4d943/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:21 np0005603541 podman[94815]: 2026-01-31 06:51:21.549804768 +0000 UTC m=+2.272357146 container init a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20)
Jan 31 01:51:21 np0005603541 podman[94815]: 2026-01-31 06:51:21.553970091 +0000 UTC m=+2.276522439 container start a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git)
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Starting VRRP child process, pid=4
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: Startup complete
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: (VI_0) Entering BACKUP STATE (init)
Jan 31 01:51:21 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:21 2026: VRRP_Script(check_backend) succeeded
Jan 31 01:51:21 np0005603541 bash[94815]: a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 01:51:21 np0005603541 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.kqakbv for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:51:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.rcppiv on compute-2
Jan 31 01:51:21 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.rcppiv on compute-2
Jan 31 01:51:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s; 446 B/s, 1 objects/s recovering
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: Deploying daemon keepalived.rgw.default.compute-2.rcppiv on compute-2
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 01:51:22 np0005603541 python3[94866]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 01:51:22 np0005603541 podman[94867]: 2026-01-31 06:51:22.52717966 +0000 UTC m=+0.020412152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:51:22 np0005603541 podman[94867]: 2026-01-31 06:51:22.635298541 +0000 UTC m=+0.128531023 container create 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 01:51:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=53'438 lcod 53'437 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=53'438 lcod 53'437 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321083069s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 146.341079712s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321039200s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 146.341079712s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321116447s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'465 lcod 54'464 mlcod 54'464 active pruub 146.341354370s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321046829s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 146.341354370s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321288109s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'453 lcod 54'452 mlcod 54'452 active pruub 146.341690063s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.321238518s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 146.341690063s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.328666687s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'449 lcod 54'448 mlcod 54'448 active pruub 146.349212646s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:22 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 60 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=11.328620911s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 146.349212646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:22 np0005603541 systemd[1]: Started libpod-conmon-09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118.scope.
Jan 31 01:51:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0eccad8d987c10f7b08e92902c33d40238963ed4d5cf889c2dbd8770d2ce70c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0eccad8d987c10f7b08e92902c33d40238963ed4d5cf889c2dbd8770d2ce70c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:22 np0005603541 podman[94867]: 2026-01-31 06:51:22.788264434 +0000 UTC m=+0.281496936 container init 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:51:22 np0005603541 podman[94867]: 2026-01-31 06:51:22.795178877 +0000 UTC m=+0.288411359 container start 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:22 np0005603541 podman[94867]: 2026-01-31 06:51:22.799304511 +0000 UTC m=+0.292537173 container attach 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:23.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:23 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 01:51:23 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 01:51:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 01:51:23 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 01:51:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 01:51:23 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'438 lcod 53'437 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 61 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 01:51:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 01:51:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869468689s) [2] async=[2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 53'437 active pruub 152.346649170s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:24 np0005603541 interesting_khorana[94881]: could not fetch user info: no user info saved
Jan 31 01:51:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:24 np0005603541 systemd[1]: libpod-09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118.scope: Deactivated successfully.
Jan 31 01:51:24 np0005603541 podman[94867]: 2026-01-31 06:51:24.418326567 +0000 UTC m=+1.911559049 container died 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 01:51:24 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c0eccad8d987c10f7b08e92902c33d40238963ed4d5cf889c2dbd8770d2ce70c-merged.mount: Deactivated successfully.
Jan 31 01:51:24 np0005603541 podman[94867]: 2026-01-31 06:51:24.787361306 +0000 UTC m=+2.280593788 container remove 09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118 (image=quay.io/ceph/ceph:v18, name=interesting_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 01:51:24 np0005603541 systemd[1]: libpod-conmon-09bc24e4e6d76088fb7b457d7ae785fa7e8bc427f58ca3941501bfa3928de118.scope: Deactivated successfully.
Jan 31 01:51:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:25 np0005603541 python3[95006]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:51:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 01:51:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:25 np0005603541 podman[95007]: 2026-01-31 06:51:25.17338481 +0000 UTC m=+0.069493592 container create c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:51:25 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:25 2026: (VI_0) Entering MASTER STATE
Jan 31 01:51:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 01:51:25 np0005603541 podman[95007]: 2026-01-31 06:51:25.122793782 +0000 UTC m=+0.018902584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 01:51:25 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 01:51:25 np0005603541 systemd[1]: Started libpod-conmon-c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9.scope.
Jan 31 01:51:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90441105789ae43b14e9948946e3ef2c8b64da4233c9b26cddff1fd2ceba3d5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90441105789ae43b14e9948946e3ef2c8b64da4233c9b26cddff1fd2ceba3d5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692562103s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 152.356018066s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849905014s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 54'448 active pruub 152.513366699s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692319870s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 152.355911255s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849797249s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 54'452 active pruub 152.513580322s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692113876s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 53'442 active pruub 152.355926514s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849512100s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 54'464 active pruub 152.513397217s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691993713s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 152.356033325s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691857338s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 54'445 active pruub 152.355850220s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691860199s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 152.356033325s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849142075s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 152.513412476s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691866875s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 54'441 active pruub 152.356140137s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:25 np0005603541 podman[95007]: 2026-01-31 06:51:25.460976648 +0000 UTC m=+0.357085480 container init c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:51:25 np0005603541 podman[95007]: 2026-01-31 06:51:25.465766598 +0000 UTC m=+0.361875400 container start c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:25 np0005603541 podman[95007]: 2026-01-31 06:51:25.581215012 +0000 UTC m=+0.477323814 container attach c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 01:51:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 01:51:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:26 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]: {
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "user_id": "openstack",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "display_name": "openstack",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "email": "",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "suspended": 0,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "max_buckets": 1000,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "subusers": [],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "keys": [
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        {
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:            "user": "openstack",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:            "access_key": "1WPQG66501R9AN3BIWK4",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:            "secret_key": "E55MoIlVQbP03EAAketntuFmYPnjAoPWH67xfzRQ"
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        }
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    ],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "swift_keys": [],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "caps": [],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "op_mask": "read, write, delete",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "default_placement": "",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "default_storage_class": "",
Jan 31 01:51:26 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "placement_tags": [],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "bucket_quota": {
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "enabled": false,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "check_on_raw": false,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_size": -1,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_size_kb": 0,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_objects": -1
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    },
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "user_quota": {
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "enabled": false,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "check_on_raw": false,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_size": -1,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_size_kb": 0,
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:        "max_objects": -1
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    },
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "temp_url_keys": [],
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "type": "rgw",
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]:    "mfa_ids": []
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]: }
Jan 31 01:51:26 np0005603541 gallant_hugle[95022]: 
Jan 31 01:51:26 np0005603541 ceph-mgr[74648]: [progress WARNING root] Starting Global Recovery Event,12 pgs not in active + clean state
Jan 31 01:51:26 np0005603541 systemd[1]: libpod-c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9.scope: Deactivated successfully.
Jan 31 01:51:26 np0005603541 podman[95007]: 2026-01-31 06:51:26.757465821 +0000 UTC m=+1.653574663 container died c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:51:26 np0005603541 systemd[1]: var-lib-containers-storage-overlay-90441105789ae43b14e9948946e3ef2c8b64da4233c9b26cddff1fd2ceba3d5b-merged.mount: Deactivated successfully.
Jan 31 01:51:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:26.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:27 np0005603541 podman[95007]: 2026-01-31 06:51:27.025668963 +0000 UTC m=+1.921777745 container remove c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9 (image=quay.io/ceph/ceph:v18, name=gallant_hugle, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:51:27 np0005603541 systemd[1]: libpod-conmon-c92454fbe1a7e6d9f2d214d5aac4a884659db4a07455e475edcba33959242ff9.scope: Deactivated successfully.
Jan 31 01:51:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:27 np0005603541 ceph-mgr[74648]: [progress INFO root] complete: finished ev be652d0a-bfeb-4d0b-ab13-6a49d03ee080 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 01:51:27 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event be652d0a-bfeb-4d0b-ab13-6a49d03ee080 (Updating ingress.rgw.default deployment (+4 -> 4)) in 20 seconds
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 01:51:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 477 B/s rd, 0 op/s
Jan 31 01:51:28 np0005603541 podman[95395]: 2026-01-31 06:51:28.434862921 +0000 UTC m=+0.059988945 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:28 np0005603541 podman[95416]: 2026-01-31 06:51:28.574744337 +0000 UTC m=+0.048820435 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 01:51:28 np0005603541 podman[95395]: 2026-01-31 06:51:28.587949587 +0000 UTC m=+0.213075601 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 podman[95551]: 2026-01-31 06:51:29.071730443 +0000 UTC m=+0.071222137 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:29 np0005603541 podman[95551]: 2026-01-31 06:51:29.119804587 +0000 UTC m=+0.119296291 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 01:51:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 01:51:29 np0005603541 podman[95617]: 2026-01-31 06:51:29.444483764 +0000 UTC m=+0.100028898 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20)
Jan 31 01:51:29 np0005603541 podman[95639]: 2026-01-31 06:51:29.510825077 +0000 UTC m=+0.051563784 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 01:51:29 np0005603541 podman[95617]: 2026-01-31 06:51:29.553980309 +0000 UTC m=+0.209525463 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, name=keepalived, version=2.2.4, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0e0b01ba-d82d-479c-abb0-4e6f3dcc27a5 does not exist
Jan 31 01:51:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 57c64932-ff6d-418b-ae22-ca8ee1bcfa9a does not exist
Jan 31 01:51:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev be3645ee-06d7-4554-9a5a-c331b5573c32 does not exist
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 998 B/s wr, 57 op/s; 293 B/s, 10 objects/s recovering
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 01:51:30 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.643806212 +0000 UTC m=+0.043288176 container create 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:51:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 01:51:30 np0005603541 systemd[1]: Started libpod-conmon-83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff.scope.
Jan 31 01:51:30 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.618293803 +0000 UTC m=+0.017775747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.746898196 +0000 UTC m=+0.146380120 container init 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.752358873 +0000 UTC m=+0.151840797 container start 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 31 01:51:30 np0005603541 eager_aryabhata[95941]: 167 167
Jan 31 01:51:30 np0005603541 systemd[1]: libpod-83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff.scope: Deactivated successfully.
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.768974959 +0000 UTC m=+0.168456913 container attach 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.769516452 +0000 UTC m=+0.168998416 container died 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 01:51:30 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6b687e453cde453dac907ae67c2d04dfc0f1b3475365e30ec2906b47f33fe828-merged.mount: Deactivated successfully.
Jan 31 01:51:30 np0005603541 podman[95924]: 2026-01-31 06:51:30.891477599 +0000 UTC m=+0.290959553 container remove 83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:51:30 np0005603541 systemd[1]: libpod-conmon-83cc7411e9f1e916eb07bd7bd73d1ccf1d1a74f21b9806f5a3f32675e60bc1ff.scope: Deactivated successfully.
Jan 31 01:51:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:31 np0005603541 podman[95967]: 2026-01-31 06:51:31.005542878 +0000 UTC m=+0.042187118 container create 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:51:31 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv[94833]: Sat Jan 31 06:51:31 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 31 01:51:31 np0005603541 systemd[1]: Started libpod-conmon-266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a.scope.
Jan 31 01:51:31 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:31 np0005603541 podman[95967]: 2026-01-31 06:51:30.982575892 +0000 UTC m=+0.019220142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:31 np0005603541 podman[95967]: 2026-01-31 06:51:31.098548759 +0000 UTC m=+0.135193029 container init 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:31 np0005603541 podman[95967]: 2026-01-31 06:51:31.102821696 +0000 UTC m=+0.139465936 container start 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:51:31 np0005603541 podman[95967]: 2026-01-31 06:51:31.106209671 +0000 UTC m=+0.142853921 container attach 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 01:51:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:31.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 33 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 01:51:31 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 20 completed events
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:31 np0005603541 ceph-mgr[74648]: [progress INFO root] Completed event 5063ef16-2dfa-411e-8f43-81ed4f921199 (Global Recovery Event) in 5 seconds
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: Health check failed: 1 slow ops, oldest one blocked for 33 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.914024353s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 52'437 active pruub 154.341018677s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913933754s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 54'449 active pruub 154.341217041s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913835526s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 53'452 active pruub 154.341857910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921279907s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 154.349487305s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:32 np0005603541 naughty_spence[95983]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:51:32 np0005603541 naughty_spence[95983]: --> relative data size: 1.0
Jan 31 01:51:32 np0005603541 naughty_spence[95983]: --> All data devices are unavailable
Jan 31 01:51:32 np0005603541 systemd[1]: libpod-266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a.scope: Deactivated successfully.
Jan 31 01:51:32 np0005603541 podman[95999]: 2026-01-31 06:51:32.221453511 +0000 UTC m=+0.025636783 container died 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:51:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-297f084a6a4035228087ba30304815484439009a46e7b41d8826f95c08a31ee1-merged.mount: Deactivated successfully.
Jan 31 01:51:32 np0005603541 podman[95999]: 2026-01-31 06:51:32.274358978 +0000 UTC m=+0.078542230 container remove 266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_spence, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:32 np0005603541 systemd[1]: libpod-conmon-266d0f7a2266168b20118189b08454e2454581898975091e09d2d34834a5958a.scope: Deactivated successfully.
Jan 31 01:51:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 870 B/s wr, 50 op/s; 255 B/s, 8 objects/s recovering
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.800022612 +0000 UTC m=+0.045894331 container create bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:51:32 np0005603541 systemd[1]: Started libpod-conmon-bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7.scope.
Jan 31 01:51:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.775008376 +0000 UTC m=+0.020880125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.873048793 +0000 UTC m=+0.118920512 container init bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.880746945 +0000 UTC m=+0.126618664 container start bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:51:32 np0005603541 festive_carson[96173]: 167 167
Jan 31 01:51:32 np0005603541 systemd[1]: libpod-bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7.scope: Deactivated successfully.
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.888184731 +0000 UTC m=+0.134056450 container attach bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.890058829 +0000 UTC m=+0.135930548 container died bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-3cd9ea8e8d1fd7188566621d781a88c2372c7f3274e9201affc62babc022e558-merged.mount: Deactivated successfully.
Jan 31 01:51:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:32.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:32 np0005603541 podman[96156]: 2026-01-31 06:51:32.967988271 +0000 UTC m=+0.213859980 container remove bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:51:32 np0005603541 systemd[1]: libpod-conmon-bd955d8fe3716af92a8484df40e8e35d8ff29f12abf5fe6b33958c3c835516d7.scope: Deactivated successfully.
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.126821393 +0000 UTC m=+0.077162125 container create dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.066750577 +0000 UTC m=+0.017091379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:33.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:33 np0005603541 systemd[1]: Started libpod-conmon-dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87.scope.
Jan 31 01:51:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485634b40ce7f375c38ca72d4bc3d0f37a5f74af03368e1011f28f94acc3eaeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485634b40ce7f375c38ca72d4bc3d0f37a5f74af03368e1011f28f94acc3eaeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485634b40ce7f375c38ca72d4bc3d0f37a5f74af03368e1011f28f94acc3eaeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485634b40ce7f375c38ca72d4bc3d0f37a5f74af03368e1011f28f94acc3eaeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.280804632 +0000 UTC m=+0.231145454 container init dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.28831354 +0000 UTC m=+0.238654322 container start dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.294049713 +0000 UTC m=+0.244390485 container attach dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 01:51:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 01:51:33 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 01:51:33 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 01:51:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]: {
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:    "0": [
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:        {
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "devices": [
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "/dev/loop3"
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            ],
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "lv_name": "ceph_lv0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "lv_size": "7511998464",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "name": "ceph_lv0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "tags": {
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.cluster_name": "ceph",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.crush_device_class": "",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.encrypted": "0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.osd_id": "0",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.type": "block",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:                "ceph.vdo": "0"
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            },
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "type": "block",
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:            "vg_name": "ceph_vg0"
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:        }
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]:    ]
Jan 31 01:51:33 np0005603541 intelligent_hofstadter[96214]: }
Jan 31 01:51:33 np0005603541 systemd[1]: libpod-dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87.scope: Deactivated successfully.
Jan 31 01:51:33 np0005603541 podman[96197]: 2026-01-31 06:51:33.988056987 +0000 UTC m=+0.938397739 container died dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:51:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-485634b40ce7f375c38ca72d4bc3d0f37a5f74af03368e1011f28f94acc3eaeb-merged.mount: Deactivated successfully.
Jan 31 01:51:34 np0005603541 podman[96197]: 2026-01-31 06:51:34.038819079 +0000 UTC m=+0.989159821 container remove dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 01:51:34 np0005603541 systemd[1]: libpod-conmon-dc2131244ad144459533adbe84ffbf34afac7cee43b155290a13ee4fd07efc87.scope: Deactivated successfully.
Jan 31 01:51:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 1 active+clean+laggy, 4 unknown, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 341 B/s wr, 8 op/s; 300 B/s, 10 objects/s recovering
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.538676956 +0000 UTC m=+0.034755452 container create 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 01:51:34 np0005603541 systemd[1]: Started libpod-conmon-148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb.scope.
Jan 31 01:51:34 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.607009049 +0000 UTC m=+0.103087555 container init 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.612428495 +0000 UTC m=+0.108506981 container start 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.615883531 +0000 UTC m=+0.111962017 container attach 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:34 np0005603541 thirsty_northcutt[96391]: 167 167
Jan 31 01:51:34 np0005603541 systemd[1]: libpod-148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb.scope: Deactivated successfully.
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.521049515 +0000 UTC m=+0.017128021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.618420955 +0000 UTC m=+0.114499451 container died 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:51:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-dd2f769156f19888c1f20486bd1d49d01ec7ddc0e80ec6633dd6a01987990bbf-merged.mount: Deactivated successfully.
Jan 31 01:51:34 np0005603541 podman[96375]: 2026-01-31 06:51:34.661893305 +0000 UTC m=+0.157971791 container remove 148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 01:51:34 np0005603541 systemd[1]: libpod-conmon-148d0d8992c6a0558b60c4757fce0fbb2ffee218dee5fb05c7cc3a387dd65ebb.scope: Deactivated successfully.
Jan 31 01:51:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 01:51:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 01:51:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317691803s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 52'437 active pruub 162.440124512s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317581177s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 54'449 active pruub 162.440155029s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317067146s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 53'452 active pruub 162.440216064s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316988945s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 162.440231323s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:34 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:34 np0005603541 podman[96414]: 2026-01-31 06:51:34.801371201 +0000 UTC m=+0.048831865 container create 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:34 np0005603541 systemd[1]: Started libpod-conmon-5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9.scope.
Jan 31 01:51:34 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28106ba67831af8099e2d982c51baca40ad53120cdf57237d559c57152847856/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28106ba67831af8099e2d982c51baca40ad53120cdf57237d559c57152847856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28106ba67831af8099e2d982c51baca40ad53120cdf57237d559c57152847856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28106ba67831af8099e2d982c51baca40ad53120cdf57237d559c57152847856/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:34 np0005603541 podman[96414]: 2026-01-31 06:51:34.876164705 +0000 UTC m=+0.123625369 container init 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:34 np0005603541 podman[96414]: 2026-01-31 06:51:34.78383126 +0000 UTC m=+0.031291924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:34 np0005603541 podman[96414]: 2026-01-31 06:51:34.887184321 +0000 UTC m=+0.134644965 container start 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:34 np0005603541 podman[96414]: 2026-01-31 06:51:34.89034541 +0000 UTC m=+0.137806084 container attach 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 01:51:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:34.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:35.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]: {
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:        "osd_id": 0,
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:        "type": "bluestore"
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]:    }
Jan 31 01:51:35 np0005603541 kind_lamarr[96430]: }
Jan 31 01:51:35 np0005603541 systemd[1]: libpod-5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9.scope: Deactivated successfully.
Jan 31 01:51:35 np0005603541 podman[96414]: 2026-01-31 06:51:35.670860612 +0000 UTC m=+0.918321286 container died 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:35 np0005603541 systemd[1]: var-lib-containers-storage-overlay-28106ba67831af8099e2d982c51baca40ad53120cdf57237d559c57152847856-merged.mount: Deactivated successfully.
Jan 31 01:51:35 np0005603541 podman[96414]: 2026-01-31 06:51:35.730787774 +0000 UTC m=+0.978248418 container remove 5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:51:35 np0005603541 systemd[1]: libpod-conmon-5919e0589d7abd39ebbb4cfbf407aa9c9b658b2146ef66a630d6f03102c675a9.scope: Deactivated successfully.
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:35 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 57eb176c-c6ac-4fee-bde0-415418884334 does not exist
Jan 31 01:51:35 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 478477b8-c26b-49ef-8484-88d3754a2247 does not exist
Jan 31 01:51:35 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0ee3d5d3-ece6-4b8c-982b-74e9de814373 does not exist
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:35 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:51:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 01:51:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 1 active+clean+laggy, 4 unknown, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.513167771 +0000 UTC m=+0.035115491 container create 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:36 np0005603541 systemd[1]: Started libpod-conmon-0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277.scope.
Jan 31 01:51:36 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.581273199 +0000 UTC m=+0.103220979 container init 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.587073614 +0000 UTC m=+0.109021354 container start 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:51:36 np0005603541 naughty_satoshi[96646]: 167 167
Jan 31 01:51:36 np0005603541 systemd[1]: libpod-0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277.scope: Deactivated successfully.
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.59131757 +0000 UTC m=+0.113265370 container attach 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.592294825 +0000 UTC m=+0.114242575 container died 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.498264639 +0000 UTC m=+0.020212379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:36 np0005603541 systemd[1]: var-lib-containers-storage-overlay-825013c3c6b2741f91dc54e2a935c3445cd19a0355ed9968b61a0d72371839b2-merged.mount: Deactivated successfully.
Jan 31 01:51:36 np0005603541 podman[96629]: 2026-01-31 06:51:36.643405745 +0000 UTC m=+0.165353485 container remove 0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:51:36 np0005603541 systemd[1]: libpod-conmon-0bc6f862f53e73af785960fe2a818327c360587562e038b992025cb5b3c8a277.scope: Deactivated successfully.
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: [progress INFO root] Writing back 21 completed events
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.gghdjs (monmap changed)...
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.gghdjs (monmap changed)...
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:51:36 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:51:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:36.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.156719761 +0000 UTC m=+0.033591953 container create c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:51:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:37 np0005603541 systemd[1]: Started libpod-conmon-c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802.scope.
Jan 31 01:51:37 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.21018307 +0000 UTC m=+0.087055252 container init c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.213605957 +0000 UTC m=+0.090478109 container start c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:37 np0005603541 recursing_raman[96799]: 167 167
Jan 31 01:51:37 np0005603541 systemd[1]: libpod-c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802.scope: Deactivated successfully.
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.216429227 +0000 UTC m=+0.093301379 container attach c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.217011522 +0000 UTC m=+0.093883734 container died c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.14112904 +0000 UTC m=+0.018001222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:37 np0005603541 systemd[1]: var-lib-containers-storage-overlay-83938ea7502d47a27ca666ac38eacb2f82e8e8ca9a65a3f39cdb09b7add2ef18-merged.mount: Deactivated successfully.
Jan 31 01:51:37 np0005603541 podman[96783]: 2026-01-31 06:51:37.257129037 +0000 UTC m=+0.134001229 container remove c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_raman, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:37 np0005603541 systemd[1]: libpod-conmon-c34e594d9d1e54da6da70159cf79af44e95d5f3b3c5b79b64bdfabf9b72e3802.scope: Deactivated successfully.
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:37 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 01:51:37 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:37 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 01:51:37 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: Reconfiguring mgr.compute-0.gghdjs (monmap changed)...
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.871104684 +0000 UTC m=+0.049563952 container create 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 01:51:37 np0005603541 systemd[1]: Started libpod-conmon-7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe.scope.
Jan 31 01:51:37 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.930303719 +0000 UTC m=+0.108762997 container init 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.935015147 +0000 UTC m=+0.113474415 container start 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:37 np0005603541 sharp_chatterjee[96952]: 167 167
Jan 31 01:51:37 np0005603541 systemd[1]: libpod-7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe.scope: Deactivated successfully.
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.940406482 +0000 UTC m=+0.118865750 container attach 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:37 np0005603541 conmon[96952]: conmon 7cb1f8fc3ca573dbe9e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe.scope/container/memory.events
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.941327764 +0000 UTC m=+0.119787032 container died 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.847419411 +0000 UTC m=+0.025878699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:37 np0005603541 systemd[1]: var-lib-containers-storage-overlay-411ca6479bc12cd62b5ec7cc716f4a4bd3ad34e786a7c7b7fde51438fc1a1bcb-merged.mount: Deactivated successfully.
Jan 31 01:51:37 np0005603541 podman[96936]: 2026-01-31 06:51:37.984417424 +0000 UTC m=+0.162876702 container remove 7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:51:37 np0005603541 systemd[1]: libpod-conmon-7cb1f8fc3ca573dbe9e1117f9151debb1e86ea862620db560e65b907f56116fe.scope: Deactivated successfully.
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 742 B/s wr, 54 op/s; 39 B/s, 3 objects/s recovering
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.50493431 +0000 UTC m=+0.040703541 container create 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 01:51:38 np0005603541 systemd[1]: Started libpod-conmon-3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908.scope.
Jan 31 01:51:38 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.553900647 +0000 UTC m=+0.089669918 container init 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.557787315 +0000 UTC m=+0.093556556 container start 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 01:51:38 np0005603541 focused_black[97104]: 167 167
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.56078505 +0000 UTC m=+0.096554291 container attach 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.561027136 +0000 UTC m=+0.096796377 container died 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:51:38 np0005603541 systemd[1]: libpod-3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908.scope: Deactivated successfully.
Jan 31 01:51:38 np0005603541 systemd[1]: var-lib-containers-storage-overlay-46b315e333941f58dab8831a5ddefb65f14a9f31b3358f0b58eab8b7f566f43d-merged.mount: Deactivated successfully.
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.488654102 +0000 UTC m=+0.024423373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:38 np0005603541 podman[97088]: 2026-01-31 06:51:38.594904005 +0000 UTC m=+0.130673256 container remove 3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:51:38 np0005603541 systemd[1]: libpod-conmon-3285f371be70780e84517cc25c83badd9b30692a8ad997d1d57a1be480d7a908.scope: Deactivated successfully.
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 01:51:38 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 01:51:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:38.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: Reconfiguring osd.0 (monmap changed)...
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: Reconfiguring daemon osd.0 on compute-0
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 38 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:39 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Jan 31 01:51:39 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Jan 31 01:51:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 01:51:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 01:51:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975888252s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 162.341400146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975490570s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 52'439 active pruub 162.341995239s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 38 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 01:51:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 682 B/s wr, 50 op/s; 36 B/s, 3 objects/s recovering
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 01:51:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:40.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:41 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 01:51:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:41 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950375557s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 52'444 active pruub 170.341415405s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957394600s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 170.348876953s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:42 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 682 B/s wr, 50 op/s; 36 B/s, 3 objects/s recovering
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: Reconfiguring osd.1 (monmap changed)...
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: Reconfiguring daemon osd.1 on compute-1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 01:51:42 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 01:51:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193369865s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 170.597778320s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936949730s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 54'465 active pruub 170.341690063s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936934471s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 53'444 active pruub 170.342041016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192858696s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 52'439 active pruub 170.598068237s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 31 01:51:43 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 01:51:43 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:43 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 01:51:43 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:44 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.iujpur (monmap changed)...
Jan 31 01:51:44 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.iujpur (monmap changed)...
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 43 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:44 np0005603541 ceph-mgr[74648]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:51:44 np0005603541 ceph-mgr[74648]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: Reconfiguring mgr.compute-2.iujpur (monmap changed)...
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 43 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: Reconfiguring daemon mgr.compute-2.iujpur on compute-2
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:45.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911890984s) [2] async=[2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 52'444 active pruub 172.438339233s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911568642s) [2] async=[2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 172.438385010s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:51:45 np0005603541 podman[97305]: 2026-01-31 06:51:45.357236294 +0000 UTC m=+0.065416501 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:45 np0005603541 podman[97326]: 2026-01-31 06:51:45.515329416 +0000 UTC m=+0.051666886 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:51:45 np0005603541 podman[97305]: 2026-01-31 06:51:45.524087976 +0000 UTC m=+0.232268253 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:51:45 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 podman[97457]: 2026-01-31 06:51:46.040925709 +0000 UTC m=+0.077898243 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 01:51:46 np0005603541 podman[97479]: 2026-01-31 06:51:46.103820015 +0000 UTC m=+0.049395199 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:46 np0005603541 podman[97457]: 2026-01-31 06:51:46.118474993 +0000 UTC m=+0.155447537 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 B/s, 2 objects/s recovering
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915776253s) [1] async=[1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 54'465 active pruub 173.544143677s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907569885s) [1] async=[1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 53'444 active pruub 173.536026001s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:46 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:51:46 np0005603541 podman[97522]: 2026-01-31 06:51:46.313420319 +0000 UTC m=+0.060846357 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 podman[97522]: 2026-01-31 06:51:46.354425606 +0000 UTC m=+0.101851634 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, release=1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev a2c9616b-a479-4645-9a77-68898d6151fb does not exist
Jan 31 01:51:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 356b2543-1aac-4502-a8fd-a6487b1238dc does not exist
Jan 31 01:51:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 275cb5d4-d665-45fc-8b3c-ff349f0424cf does not exist
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:51:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:51:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:47 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Jan 31 01:51:47 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Jan 31 01:51:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:47.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 01:51:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.380066061 +0000 UTC m=+0.034413624 container create 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:51:47 np0005603541 systemd[1]: Started libpod-conmon-5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400.scope.
Jan 31 01:51:47 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.456740752 +0000 UTC m=+0.111088345 container init 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.365121637 +0000 UTC m=+0.019469210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.464011455 +0000 UTC m=+0.118359018 container start 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:47 np0005603541 blissful_kapitsa[97712]: 167 167
Jan 31 01:51:47 np0005603541 systemd[1]: libpod-5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400.scope: Deactivated successfully.
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.467374419 +0000 UTC m=+0.121722002 container attach 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 01:51:47 np0005603541 conmon[97712]: conmon 5d72df7d10786bf342e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400.scope/container/memory.events
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.467846721 +0000 UTC m=+0.122194284 container died 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:51:47 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a85300f261f9b4e7a93f51043ec6c2d74aaba43eba00db121a26d789e3d674d4-merged.mount: Deactivated successfully.
Jan 31 01:51:47 np0005603541 podman[97695]: 2026-01-31 06:51:47.501249968 +0000 UTC m=+0.155597531 container remove 5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 01:51:47 np0005603541 systemd[1]: libpod-conmon-5d72df7d10786bf342e0bb55102461261bc8a9c000dbc2e30da994dc1dae7400.scope: Deactivated successfully.
Jan 31 01:51:47 np0005603541 podman[97735]: 2026-01-31 06:51:47.607749147 +0000 UTC m=+0.041710946 container create bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:47 np0005603541 systemd[1]: Started libpod-conmon-bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4.scope.
Jan 31 01:51:47 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:47 np0005603541 podman[97735]: 2026-01-31 06:51:47.681005054 +0000 UTC m=+0.114966883 container init bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:47 np0005603541 podman[97735]: 2026-01-31 06:51:47.589913501 +0000 UTC m=+0.023875320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:47 np0005603541 podman[97735]: 2026-01-31 06:51:47.691913936 +0000 UTC m=+0.125875735 container start bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 01:51:47 np0005603541 podman[97735]: 2026-01-31 06:51:47.702140694 +0000 UTC m=+0.136102493 container attach bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:48 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 31 01:51:48 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 B/s, 3 objects/s recovering
Jan 31 01:51:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:51:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:51:48 np0005603541 friendly_yonath[97753]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:51:48 np0005603541 friendly_yonath[97753]: --> relative data size: 1.0
Jan 31 01:51:48 np0005603541 friendly_yonath[97753]: --> All data devices are unavailable
Jan 31 01:51:48 np0005603541 systemd[1]: libpod-bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4.scope: Deactivated successfully.
Jan 31 01:51:48 np0005603541 podman[97735]: 2026-01-31 06:51:48.47367555 +0000 UTC m=+0.907637369 container died bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4a78fd5d535c7da4b3fd5d5714c2ee597e097792f0561e8098018cf85bbecaf1-merged.mount: Deactivated successfully.
Jan 31 01:51:48 np0005603541 podman[97735]: 2026-01-31 06:51:48.525845427 +0000 UTC m=+0.959807226 container remove bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:51:48 np0005603541 systemd[1]: libpod-conmon-bca2989fde429dd962e6962635079df3711818a721e66e98494d2860c90eebe4.scope: Deactivated successfully.
Jan 31 01:51:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:48.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:48 np0005603541 podman[97969]: 2026-01-31 06:51:48.983971639 +0000 UTC m=+0.040897516 container create 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:51:49 np0005603541 systemd[1]: Started libpod-conmon-1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a.scope.
Jan 31 01:51:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:51:49
Jan 31 01:51:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:51:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Some PGs (0.012461) are inactive; try again later
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:49.038943276 +0000 UTC m=+0.095869183 container init 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:49.043321227 +0000 UTC m=+0.100247104 container start 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:49 np0005603541 magical_hamilton[97987]: 167 167
Jan 31 01:51:49 np0005603541 systemd[1]: libpod-1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a.scope: Deactivated successfully.
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:49.04708707 +0000 UTC m=+0.104012947 container attach 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:49.047551273 +0000 UTC m=+0.104477150 container died 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:48.966672536 +0000 UTC m=+0.023598443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ef53ff81103c88516995623fc7c94fb32c7065f6384e36d81d27e2974d7261d0-merged.mount: Deactivated successfully.
Jan 31 01:51:49 np0005603541 podman[97969]: 2026-01-31 06:51:49.082241771 +0000 UTC m=+0.139167648 container remove 1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:49 np0005603541 systemd[1]: libpod-conmon-1c068157607d6874d63418d294486f7c5cd43ea185cd6675481624bfafd88a1a.scope: Deactivated successfully.
Jan 31 01:51:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:49 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 31 01:51:49 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 31 01:51:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:49.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:49 np0005603541 podman[98011]: 2026-01-31 06:51:49.197395448 +0000 UTC m=+0.032504016 container create 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:51:49 np0005603541 systemd[1]: Started libpod-conmon-64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648.scope.
Jan 31 01:51:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6514b7fa2d88a2051daec577d950de90e8a0e328843f9a892e8d04d41809d88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6514b7fa2d88a2051daec577d950de90e8a0e328843f9a892e8d04d41809d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6514b7fa2d88a2051daec577d950de90e8a0e328843f9a892e8d04d41809d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6514b7fa2d88a2051daec577d950de90e8a0e328843f9a892e8d04d41809d88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:49 np0005603541 podman[98011]: 2026-01-31 06:51:49.260231073 +0000 UTC m=+0.095339641 container init 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:49 np0005603541 podman[98011]: 2026-01-31 06:51:49.271507296 +0000 UTC m=+0.106615854 container start 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 01:51:49 np0005603541 podman[98011]: 2026-01-31 06:51:49.275966937 +0000 UTC m=+0.111075495 container attach 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:51:49 np0005603541 podman[98011]: 2026-01-31 06:51:49.182668239 +0000 UTC m=+0.017776817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:49 np0005603541 systemd-logind[817]: New session 34 of user zuul.
Jan 31 01:51:49 np0005603541 systemd[1]: Started Session 34 of User zuul.
Jan 31 01:51:49 np0005603541 silly_elgamal[98027]: {
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:    "0": [
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:        {
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "devices": [
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "/dev/loop3"
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            ],
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "lv_name": "ceph_lv0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "lv_size": "7511998464",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "name": "ceph_lv0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "tags": {
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.cluster_name": "ceph",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.crush_device_class": "",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.encrypted": "0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.osd_id": "0",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.type": "block",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:                "ceph.vdo": "0"
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            },
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "type": "block",
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:            "vg_name": "ceph_vg0"
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:        }
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]:    ]
Jan 31 01:51:50 np0005603541 silly_elgamal[98027]: }
Jan 31 01:51:50 np0005603541 systemd[1]: libpod-64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648.scope: Deactivated successfully.
Jan 31 01:51:50 np0005603541 podman[98093]: 2026-01-31 06:51:50.069391542 +0000 UTC m=+0.025207193 container died 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e6514b7fa2d88a2051daec577d950de90e8a0e328843f9a892e8d04d41809d88-merged.mount: Deactivated successfully.
Jan 31 01:51:50 np0005603541 podman[98093]: 2026-01-31 06:51:50.11681698 +0000 UTC m=+0.072632591 container remove 64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:51:50 np0005603541 systemd[1]: libpod-conmon-64c99caa61ece790e351c4d4f85d13f584c470be4f5fda41a4ac7ab119748648.scope: Deactivated successfully.
Jan 31 01:51:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 91 B/s, 4 objects/s recovering
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 01:51:50 np0005603541 python3.9[98278]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.611436907 +0000 UTC m=+0.032380353 container create f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:50 np0005603541 systemd[1]: Started libpod-conmon-f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56.scope.
Jan 31 01:51:50 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.683186385 +0000 UTC m=+0.104129851 container init f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.596748839 +0000 UTC m=+0.017692305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.69853194 +0000 UTC m=+0.119475406 container start f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.701924755 +0000 UTC m=+0.122868231 container attach f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:50 np0005603541 happy_almeida[98371]: 167 167
Jan 31 01:51:50 np0005603541 systemd[1]: libpod-f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56.scope: Deactivated successfully.
Jan 31 01:51:50 np0005603541 conmon[98371]: conmon f7a6af862c932cdb1e73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56.scope/container/memory.events
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.703800942 +0000 UTC m=+0.124744408 container died f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:51:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-09f8ffe0530c10688b3312dd62b9186f84bde7408492b7e05ca34aa66dae4010-merged.mount: Deactivated successfully.
Jan 31 01:51:50 np0005603541 podman[98346]: 2026-01-31 06:51:50.733726022 +0000 UTC m=+0.154669498 container remove f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:51:50 np0005603541 systemd[1]: libpod-conmon-f7a6af862c932cdb1e7373066ff33399857362e70c2a14811a0de8b45cb17f56.scope: Deactivated successfully.
Jan 31 01:51:50 np0005603541 podman[98396]: 2026-01-31 06:51:50.858605541 +0000 UTC m=+0.037669265 container create b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 01:51:50 np0005603541 systemd[1]: Started libpod-conmon-b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb.scope.
Jan 31 01:51:50 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:51:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af685b18fdf82036f507256054654dd57dd625556fcda3b7a5c36c85ee00e129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af685b18fdf82036f507256054654dd57dd625556fcda3b7a5c36c85ee00e129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af685b18fdf82036f507256054654dd57dd625556fcda3b7a5c36c85ee00e129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:50 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af685b18fdf82036f507256054654dd57dd625556fcda3b7a5c36c85ee00e129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:51:50 np0005603541 podman[98396]: 2026-01-31 06:51:50.926928784 +0000 UTC m=+0.105992518 container init b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:51:50 np0005603541 podman[98396]: 2026-01-31 06:51:50.931535789 +0000 UTC m=+0.110599513 container start b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:51:50 np0005603541 podman[98396]: 2026-01-31 06:51:50.93436037 +0000 UTC m=+0.113424124 container attach b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:51:50 np0005603541 podman[98396]: 2026-01-31 06:51:50.842294292 +0000 UTC m=+0.021358066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:51:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:50.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:51 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 31 01:51:51 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 31 01:51:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 01:51:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:51 np0005603541 confident_bartik[98433]: {
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:        "osd_id": 0,
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:        "type": "bluestore"
Jan 31 01:51:51 np0005603541 confident_bartik[98433]:    }
Jan 31 01:51:51 np0005603541 confident_bartik[98433]: }
Jan 31 01:51:51 np0005603541 systemd[1]: libpod-b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb.scope: Deactivated successfully.
Jan 31 01:51:51 np0005603541 podman[98396]: 2026-01-31 06:51:51.693961518 +0000 UTC m=+0.873025252 container died b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:51:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay-af685b18fdf82036f507256054654dd57dd625556fcda3b7a5c36c85ee00e129-merged.mount: Deactivated successfully.
Jan 31 01:51:52 np0005603541 python3.9[98649]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:51:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 31 01:51:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 31 01:51:52 np0005603541 podman[98396]: 2026-01-31 06:51:52.160178942 +0000 UTC m=+1.339242666 container remove b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:51:52 np0005603541 systemd[1]: libpod-conmon-b9d2ff19484b7a6fea551d62efa8ec1897926832ba7b5010decdaea90a11b3fb.scope: Deactivated successfully.
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:51:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 91 B/s, 4 objects/s recovering
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:52 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 19d482cd-27c6-4de4-bb5f-a41dfde279bd does not exist
Jan 31 01:51:52 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 74b3dbb6-f676-4314-b325-4f6bc042cb81 does not exist
Jan 31 01:51:52 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 11ed049e-ba4d-466a-964a-37dad901b249 does not exist
Jan 31 01:51:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 01:51:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 01:51:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 01:51:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 53 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 78 B/s, 3 objects/s recovering
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 31 01:51:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:51:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:51:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:54.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 53 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 01:51:55 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 01:51:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 01:51:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 31 01:51:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 01:51:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:51:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:56.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 01:51:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 01:51:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 01:51:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 31 01:51:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 01:51:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:51:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:58.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 58 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:51:59 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 31 01:51:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:51:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:51:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:51:59 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 58 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 01:51:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 01:51:59 np0005603541 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 01:51:59 np0005603541 systemd[1]: session-34.scope: Consumed 7.590s CPU time.
Jan 31 01:51:59 np0005603541 systemd-logind[817]: Session 34 logged out. Waiting for processes to exit.
Jan 31 01:51:59 np0005603541 systemd-logind[817]: Removed session 34.
Jan 31 01:52:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Jan 31 01:52:00 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Jan 31 01:52:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 01:52:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 01:52:00 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153639793s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 52'435 active pruub 186.333679199s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:00 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:52:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:52:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 01:52:01 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 01:52:01 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:01 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 01:52:02 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387310028s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 186.341903687s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:02 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 01:52:02 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:02.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 01:52:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 01:52:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 01:52:03 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946665764s) [1] async=[1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 52'435 active pruub 190.982055664s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:03 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:03 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 01:52:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 63 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 01:52:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 01:52:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 1 active+remapped, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 01:52:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338972092s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 53'446 active pruub 186.349975586s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:04 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 63 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 01:52:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 01:52:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:04.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 01:52:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 01:52:05 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 01:52:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.847044945s) [1] async=[1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 193.021102905s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:05 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 active+remapped, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 01:52:06 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 01:52:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 01:52:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:06.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 01:52:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 01:52:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 01:52:07 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002256393s) [1] async=[1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 53'446 active pruub 195.172988892s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:07 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 4 objects/s recovering
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 01:52:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:08.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 01:52:08 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 01:52:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 68 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:52:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.361378652521869e-06 of space, bias 1.0, pg target 0.0019084135957565607 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:52:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:10 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 68 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:10 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 01:52:10 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 01:52:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 3 objects/s recovering
Jan 31 01:52:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 31 01:52:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 01:52:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:10.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 01:52:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:11.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 01:52:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 60 B/s, 3 objects/s recovering
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 01:52:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 01:52:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:12.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 01:52:13 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 01:52:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 01:52:13 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 73 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 01:52:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 01:52:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 01:52:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 active+clean+laggy, 2 unknown, 318 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 73 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:14 np0005603541 systemd-logind[817]: New session 35 of user zuul.
Jan 31 01:52:14 np0005603541 systemd[1]: Started Session 35 of User zuul.
Jan 31 01:52:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:14.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 01:52:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 01:52:15 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 01:52:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:15 np0005603541 python3.9[98972]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 01:52:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 01:52:16 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 01:52:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 01:52:16 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 01:52:16 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 01:52:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 1 active+clean+laggy, 2 unknown, 318 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:16 np0005603541 python3.9[99147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:52:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:16.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:17 np0005603541 python3.9[99303]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:52:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 225 B/s wr, 14 op/s; 48 B/s, 1 objects/s recovering
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:18 np0005603541 python3.9[99457]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:52:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:52:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:18.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:52:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 78 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 78 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:19 np0005603541 python3.9[99611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:52:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 331 B/s wr, 18 op/s; 71 B/s, 2 objects/s recovering
Jan 31 01:52:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 31 01:52:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 01:52:20 np0005603541 python3.9[99764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:52:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:20.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:21 np0005603541 python3.9[99914]: ansible-ansible.builtin.service_facts Invoked
Jan 31 01:52:21 np0005603541 network[99931]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 01:52:21 np0005603541 network[99932]: 'network-scripts' will be removed from distribution in near future.
Jan 31 01:52:21 np0005603541 network[99933]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 01:52:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 01:52:21 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 01:52:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 01:52:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 01:52:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 285 B/s wr, 16 op/s; 61 B/s, 2 objects/s recovering
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:22 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 01:52:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:23 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 31 01:52:23 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 31 01:52:23 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 01:52:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 13 op/s; 54 B/s, 1 objects/s recovering
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 01:52:24 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:24 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 01:52:24 np0005603541 python3.9[100195]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:52:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:24.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 01:52:25 np0005603541 python3.9[100345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:52:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 01:52:25 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 01:52:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:25 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 01:52:26 np0005603541 python3.9[100500]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 01:52:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 01:52:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 01:52:27 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 01:52:27 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 01:52:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:27 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:27 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 01:52:27 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 01:52:28 np0005603541 python3.9[100659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 01:52:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=54'463 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:28 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 01:52:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 01:52:28 np0005603541 python3.9[100793]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:52:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 88 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 01:52:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:29 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 01:52:29 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 01:52:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 88 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 01:52:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 01:52:30 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 luod=0'0 crt=53'445 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:30 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:30 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 01:52:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 01:52:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 01:52:31 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 01:52:31 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 luod=0'0 crt=53'438 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:31 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:31 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Jan 31 01:52:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Jan 31 01:52:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 01:52:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 01:52:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 01:52:32 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:34 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 01:52:34 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 01:52:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 198 B/s wr, 13 op/s; 85 B/s, 3 objects/s recovering
Jan 31 01:52:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 31 01:52:34 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 01:52:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 01:52:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 01:52:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 170 B/s wr, 11 op/s; 73 B/s, 2 objects/s recovering
Jan 31 01:52:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 31 01:52:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 01:52:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 01:52:36 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 01:52:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 01:52:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 01:52:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 172 B/s wr, 11 op/s; 74 B/s, 2 objects/s recovering
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 31 01:52:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 01:52:38 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 01:52:38 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 01:52:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:39.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:52:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 01:52:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 01:52:39 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 01:52:40 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 01:52:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:40 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:41 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 01:52:41 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 01:52:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 01:52:41 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 01:52:41 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 01:52:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:41 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 01:52:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:42 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 01:52:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:52:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 01:52:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 01:52:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 01:52:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 luod=0'0 crt=54'458 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:43 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 01:52:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 01:52:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 01:52:44 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 01:52:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:45 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 01:52:45 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 01:52:45 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 01:52:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:45 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 01:52:45 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 01:52:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 31 01:52:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:47.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:52:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:52:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:49.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:52:49
Jan 31 01:52:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:52:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Jan 31 01:52:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 45 B/s, 1 objects/s recovering
Jan 31 01:52:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 01:52:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 01:52:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 01:52:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:53.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:53 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Jan 31 01:52:53 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Jan 31 01:52:53 np0005603541 podman[101166]: 2026-01-31 06:52:53.516582308 +0000 UTC m=+0.099788740 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 01:52:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:53 np0005603541 podman[101166]: 2026-01-31 06:52:53.63190788 +0000 UTC m=+0.215114362 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 podman[101323]: 2026-01-31 06:52:54.191255111 +0000 UTC m=+0.080071824 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:52:54 np0005603541 podman[101344]: 2026-01-31 06:52:54.267799607 +0000 UTC m=+0.058058952 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:52:54 np0005603541 podman[101323]: 2026-01-31 06:52:54.29837287 +0000 UTC m=+0.187189593 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 01:52:54 np0005603541 podman[101390]: 2026-01-31 06:52:54.500987702 +0000 UTC m=+0.055547459 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:52:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:52:54 np0005603541 podman[101411]: 2026-01-31 06:52:54.564996969 +0000 UTC m=+0.051440559 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container)
Jan 31 01:52:54 np0005603541 podman[101390]: 2026-01-31 06:52:54.580340807 +0000 UTC m=+0.134900514 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:52:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:55.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev cb8d8ffc-5c43-42b2-ad3f-21a0d3c58a45 does not exist
Jan 31 01:52:55 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c1e5a6b0-8d27-4291-a4ee-6f9d074712fe does not exist
Jan 31 01:52:55 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev fa77d707-8c55-4d4c-96bd-557b3740030e does not exist
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:52:55 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.004564417 +0000 UTC m=+0.044789604 container create 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:52:56 np0005603541 systemd[75973]: Created slice User Background Tasks Slice.
Jan 31 01:52:56 np0005603541 systemd[75973]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 01:52:56 np0005603541 systemd[1]: Started libpod-conmon-1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab.scope.
Jan 31 01:52:56 np0005603541 systemd[75973]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 01:52:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.071293342 +0000 UTC m=+0.111518539 container init 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:55.978638769 +0000 UTC m=+0.018863986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.077690879 +0000 UTC m=+0.117916056 container start 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.0809419 +0000 UTC m=+0.121167087 container attach 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 01:52:56 np0005603541 dreamy_payne[101719]: 167 167
Jan 31 01:52:56 np0005603541 systemd[1]: libpod-1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab.scope: Deactivated successfully.
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.085900071 +0000 UTC m=+0.126125328 container died 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 01:52:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9b491602b64d9e7c71fd7f252533d9104f33ea10cf8b54ac85965b2beabe364c-merged.mount: Deactivated successfully.
Jan 31 01:52:56 np0005603541 podman[101701]: 2026-01-31 06:52:56.136021136 +0000 UTC m=+0.176246303 container remove 1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_payne, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:52:56 np0005603541 systemd[1]: libpod-conmon-1d15ac20d57fdc90b93cbd40950fe8de9f32582cb4a44bf5d12a913576a0b3ab.scope: Deactivated successfully.
Jan 31 01:52:56 np0005603541 podman[101744]: 2026-01-31 06:52:56.271428823 +0000 UTC m=+0.040221782 container create ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:52:56 np0005603541 systemd[1]: Started libpod-conmon-ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b.scope.
Jan 31 01:52:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 01:52:56 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:56 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:56 np0005603541 podman[101744]: 2026-01-31 06:52:56.252861356 +0000 UTC m=+0.021654315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:56 np0005603541 podman[101744]: 2026-01-31 06:52:56.349440155 +0000 UTC m=+0.118233114 container init ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:52:56 np0005603541 podman[101744]: 2026-01-31 06:52:56.360177279 +0000 UTC m=+0.128970238 container start ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 01:52:56 np0005603541 podman[101744]: 2026-01-31 06:52:56.363820199 +0000 UTC m=+0.132613158 container attach ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:52:56 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 01:52:56 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.932495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842376932623, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7600, "num_deletes": 251, "total_data_size": 9754593, "memory_usage": 9925264, "flush_reason": "Manual Compaction"}
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842376981106, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7964613, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 137, "largest_seqno": 7728, "table_properties": {"data_size": 7936238, "index_size": 18532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8773, "raw_key_size": 82701, "raw_average_key_size": 23, "raw_value_size": 7868673, "raw_average_value_size": 2248, "num_data_blocks": 816, "num_entries": 3500, "num_filter_entries": 3500, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842018, "oldest_key_time": 1769842018, "file_creation_time": 1769842376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 48656 microseconds, and 13950 cpu microseconds.
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.981159) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7964613 bytes OK
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.981177) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.982394) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.982408) EVENT_LOG_v1 {"time_micros": 1769842376982404, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.982425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9721026, prev total WAL file size 9721026, number of live WAL files 2.
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.983560) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7777KB) 13(51KB) 8(1944B)]
Jan 31 01:52:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842376983678, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8019017, "oldest_snapshot_seqno": -1}
Jan 31 01:52:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3309 keys, 7975731 bytes, temperature: kUnknown
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377035416, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7975731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7947791, "index_size": 18550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80538, "raw_average_key_size": 24, "raw_value_size": 7882041, "raw_average_value_size": 2382, "num_data_blocks": 820, "num_entries": 3309, "num_filter_entries": 3309, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769842376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:57.035706) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7975731 bytes
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:57.037445) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.7 rd, 153.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3600, records dropped: 291 output_compression: NoCompression
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:57.037462) EVENT_LOG_v1 {"time_micros": 1769842377037454, "job": 4, "event": "compaction_finished", "compaction_time_micros": 51821, "compaction_time_cpu_micros": 16580, "output_level": 6, "num_output_files": 1, "total_output_size": 7975731, "num_input_records": 3600, "num_output_records": 3309, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377038043, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377038075, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377038097, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:52:56.983401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:52:57 np0005603541 angry_mendeleev[101761]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:52:57 np0005603541 angry_mendeleev[101761]: --> relative data size: 1.0
Jan 31 01:52:57 np0005603541 angry_mendeleev[101761]: --> All data devices are unavailable
Jan 31 01:52:57 np0005603541 systemd[1]: libpod-ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b.scope: Deactivated successfully.
Jan 31 01:52:57 np0005603541 podman[101744]: 2026-01-31 06:52:57.129554726 +0000 UTC m=+0.898347685 container died ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 01:52:57 np0005603541 systemd[1]: var-lib-containers-storage-overlay-48bdf2fe13892ae5e0924bdfd9e18ebdb9ca61192b4c361f7b44fa97480240a9-merged.mount: Deactivated successfully.
Jan 31 01:52:57 np0005603541 podman[101744]: 2026-01-31 06:52:57.178986434 +0000 UTC m=+0.947779413 container remove ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:52:57 np0005603541 systemd[1]: libpod-conmon-ba76660bf13984caadf12429fb2e9d4d63a6e663f75233a604e38da59106526b.scope: Deactivated successfully.
Jan 31 01:52:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:52:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:57.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:52:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 01:52:57 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.732023899 +0000 UTC m=+0.041352690 container create 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 31 01:52:57 np0005603541 systemd[1]: Started libpod-conmon-9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8.scope.
Jan 31 01:52:57 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.713425561 +0000 UTC m=+0.022754363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.826029315 +0000 UTC m=+0.135358126 container init 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.832398583 +0000 UTC m=+0.141727364 container start 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:52:57 np0005603541 confident_zhukovsky[101949]: 167 167
Jan 31 01:52:57 np0005603541 systemd[1]: libpod-9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8.scope: Deactivated successfully.
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.839485477 +0000 UTC m=+0.148814288 container attach 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.840957814 +0000 UTC m=+0.150286635 container died 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:52:57 np0005603541 systemd[1]: var-lib-containers-storage-overlay-52aba91b7b318843eaa128b525a68337dd6173823ba9e391031984bcabe10238-merged.mount: Deactivated successfully.
Jan 31 01:52:57 np0005603541 podman[101932]: 2026-01-31 06:52:57.934512629 +0000 UTC m=+0.243841420 container remove 9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:52:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:57 np0005603541 systemd[1]: libpod-conmon-9c5a32998a228b42bae18b67786523d130d15939a04045ce34adedcac2094bc8.scope: Deactivated successfully.
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.08111928 +0000 UTC m=+0.047628784 container create 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 01:52:58 np0005603541 systemd[1]: Started libpod-conmon-770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee.scope.
Jan 31 01:52:58 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0eec3dbf3146e37f587ce28616162c4085bf29fc9c3bac8d723d0185b948946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0eec3dbf3146e37f587ce28616162c4085bf29fc9c3bac8d723d0185b948946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0eec3dbf3146e37f587ce28616162c4085bf29fc9c3bac8d723d0185b948946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:58 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0eec3dbf3146e37f587ce28616162c4085bf29fc9c3bac8d723d0185b948946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.055918289 +0000 UTC m=+0.022427823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.153971145 +0000 UTC m=+0.120480649 container init 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.159491591 +0000 UTC m=+0.126001085 container start 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.216014454 +0000 UTC m=+0.182523968 container attach 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:52:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]: {
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:    "0": [
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:        {
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "devices": [
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "/dev/loop3"
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            ],
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "lv_name": "ceph_lv0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "lv_size": "7511998464",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "name": "ceph_lv0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "tags": {
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.cluster_name": "ceph",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.crush_device_class": "",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.encrypted": "0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.osd_id": "0",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.type": "block",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:                "ceph.vdo": "0"
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            },
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "type": "block",
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:            "vg_name": "ceph_vg0"
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:        }
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]:    ]
Jan 31 01:52:58 np0005603541 loving_sutherland[101990]: }
Jan 31 01:52:58 np0005603541 systemd[1]: libpod-770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee.scope: Deactivated successfully.
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.834537293 +0000 UTC m=+0.801046787 container died 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:52:58 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f0eec3dbf3146e37f587ce28616162c4085bf29fc9c3bac8d723d0185b948946-merged.mount: Deactivated successfully.
Jan 31 01:52:58 np0005603541 podman[101974]: 2026-01-31 06:52:58.944475092 +0000 UTC m=+0.910984586 container remove 770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 01:52:58 np0005603541 systemd[1]: libpod-conmon-770d371f8c0268d0bfb24ab1dab4c1ea4191ef1edbb9c0e90ab61e2953d546ee.scope: Deactivated successfully.
Jan 31 01:52:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:52:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:59.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:52:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:52:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:52:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:52:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:59.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.481024602 +0000 UTC m=+0.035758132 container create 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:52:59 np0005603541 systemd[1]: Started libpod-conmon-8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b.scope.
Jan 31 01:52:59 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.464511275 +0000 UTC m=+0.019244815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.575036208 +0000 UTC m=+0.129769728 container init 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.582498792 +0000 UTC m=+0.137232312 container start 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.585882615 +0000 UTC m=+0.140616135 container attach 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:52:59 np0005603541 stoic_solomon[102172]: 167 167
Jan 31 01:52:59 np0005603541 systemd[1]: libpod-8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b.scope: Deactivated successfully.
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.58771111 +0000 UTC m=+0.142444650 container died 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:52:59 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5bf26c88e883f1e2cf8956be2323628b3ad157428d5afa8be4ee35c2f5ad79f3-merged.mount: Deactivated successfully.
Jan 31 01:52:59 np0005603541 podman[102155]: 2026-01-31 06:52:59.625107942 +0000 UTC m=+0.179841462 container remove 8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:52:59 np0005603541 systemd[1]: libpod-conmon-8aa9c151b5bc84d4b86ccd11d926535e1265582640ca70bcb7c5a35fb5f7db2b.scope: Deactivated successfully.
Jan 31 01:52:59 np0005603541 podman[102198]: 2026-01-31 06:52:59.753794282 +0000 UTC m=+0.050533975 container create ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:52:59 np0005603541 systemd[1]: Started libpod-conmon-ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d.scope.
Jan 31 01:52:59 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:52:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db77b6e407ad5205a0e919dbbd212b485a0013a9ce48ab68c2fe1a7b8f36caeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db77b6e407ad5205a0e919dbbd212b485a0013a9ce48ab68c2fe1a7b8f36caeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db77b6e407ad5205a0e919dbbd212b485a0013a9ce48ab68c2fe1a7b8f36caeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db77b6e407ad5205a0e919dbbd212b485a0013a9ce48ab68c2fe1a7b8f36caeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:52:59 np0005603541 podman[102198]: 2026-01-31 06:52:59.736519587 +0000 UTC m=+0.033259300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:52:59 np0005603541 podman[102198]: 2026-01-31 06:52:59.836721055 +0000 UTC m=+0.133460768 container init ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:52:59 np0005603541 podman[102198]: 2026-01-31 06:52:59.842023106 +0000 UTC m=+0.138762789 container start ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:52:59 np0005603541 podman[102198]: 2026-01-31 06:52:59.858230806 +0000 UTC m=+0.154970549 container attach ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:53:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]: {
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:        "osd_id": 0,
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:        "type": "bluestore"
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]:    }
Jan 31 01:53:00 np0005603541 adoring_jackson[102214]: }
Jan 31 01:53:00 np0005603541 systemd[1]: libpod-ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d.scope: Deactivated successfully.
Jan 31 01:53:00 np0005603541 podman[102198]: 2026-01-31 06:53:00.643132754 +0000 UTC m=+0.939872447 container died ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:53:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-db77b6e407ad5205a0e919dbbd212b485a0013a9ce48ab68c2fe1a7b8f36caeb-merged.mount: Deactivated successfully.
Jan 31 01:53:00 np0005603541 podman[102198]: 2026-01-31 06:53:00.701225856 +0000 UTC m=+0.997965549 container remove ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:53:00 np0005603541 systemd[1]: libpod-conmon-ed36f50cdbd29be80e51b381505faca1e3aaa631240eb11b8d888d41fa77331d.scope: Deactivated successfully.
Jan 31 01:53:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:53:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:01.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:53:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:53:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:53:01 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 87e7d2f5-0020-4720-ae70-00f15a5e6e57 does not exist
Jan 31 01:53:01 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 53359895-4cca-4419-8b16-39355a0d4f78 does not exist
Jan 31 01:53:01 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 81eaae44-c4d4-44f8-bc73-b254d856bb9e does not exist
Jan 31 01:53:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:02 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 01:53:02 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 01:53:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:53:02 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:53:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:03.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:03 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 01:53:03 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 01:53:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Jan 31 01:53:04 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Jan 31 01:53:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:05 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Jan 31 01:53:05 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Jan 31 01:53:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 01:53:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 01:53:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:53:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:12 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 01:53:12 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 01:53:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:13.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:15.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Jan 31 01:53:15 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Jan 31 01:53:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:53:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:53:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:17 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Jan 31 01:53:17 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Jan 31 01:53:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:18 np0005603541 python3.9[102508]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:53:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:19.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:19 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 01:53:19 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 01:53:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 01:53:20 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 01:53:20 np0005603541 python3.9[102796]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 01:53:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:21 np0005603541 python3.9[102948]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 01:53:21 np0005603541 python3.9[103101]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 01:53:22 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 01:53:22 np0005603541 python3.9[103253]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 01:53:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:24 np0005603541 python3.9[103406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:53:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:24 np0005603541 python3.9[103558]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:53:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:25 np0005603541 python3.9[103636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:26 np0005603541 python3.9[103789]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:53:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:27 np0005603541 python3.9[103943]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 01:53:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:28 np0005603541 python3.9[104097]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 01:53:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:29 np0005603541 python3.9[104300]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:53:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:29 np0005603541 python3.9[104453]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 01:53:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:30 np0005603541 python3.9[104605]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:53:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:53:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:31.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:53:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:31 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 01:53:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:31.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:31 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 01:53:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:32 np0005603541 python3.9[104759]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:53:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:33 np0005603541 python3.9[104912]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:53:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:34 np0005603541 python3.9[104991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:53:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:34 np0005603541 python3.9[105143]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:53:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:35 np0005603541 python3.9[105221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:53:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:36 np0005603541 python3.9[105374]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:53:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:37.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:38 np0005603541 python3.9[105526]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:53:38 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 01:53:38 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 01:53:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:39 np0005603541 python3.9[105678]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 01:53:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:39.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:39 np0005603541 python3.9[105828]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:53:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 01:53:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 01:53:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:41 np0005603541 python3.9[105981]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:53:41 np0005603541 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 01:53:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:41.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:41 np0005603541 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 01:53:41 np0005603541 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 01:53:41 np0005603541 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 01:53:41 np0005603541 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 01:53:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:41.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:41 np0005603541 python3.9[106143]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 01:53:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:43 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 01:53:43 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 01:53:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:43.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:45.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:45.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:45 np0005603541 python3.9[106296]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:53:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:46 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Jan 31 01:53:46 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Jan 31 01:53:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:46 np0005603541 python3.9[106451]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:53:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:47 np0005603541 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 01:53:47 np0005603541 systemd[1]: session-35.scope: Consumed 1min 268ms CPU time.
Jan 31 01:53:47 np0005603541 systemd-logind[817]: Session 35 logged out. Waiting for processes to exit.
Jan 31 01:53:47 np0005603541 systemd-logind[817]: Removed session 35.
Jan 31 01:53:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:47.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:53:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:53:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:53:49
Jan 31 01:53:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:53:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:53:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'volumes', '.mgr']
Jan 31 01:53:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:53:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:49.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:50 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Jan 31 01:53:50 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Jan 31 01:53:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:51 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 01:53:51 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 01:53:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 01:53:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 01:53:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:52 np0005603541 systemd-logind[817]: New session 36 of user zuul.
Jan 31 01:53:52 np0005603541 systemd[1]: Started Session 36 of User zuul.
Jan 31 01:53:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:53 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 01:53:53 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 01:53:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:53 np0005603541 python3.9[106684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:53:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:53:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:53:54 np0005603541 python3.9[106841]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 01:53:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:55 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 01:53:55 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 01:53:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:55.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:55 np0005603541 python3.9[106994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:53:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:56 np0005603541 python3.9[107079]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 01:53:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:57.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:58 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 01:53:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:53:58 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 01:53:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:58 np0005603541 python3.9[107233]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:53:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:53:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:53:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:53:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:53:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:53:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:53:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:59.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:53:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:53:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 01:54:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 01:54:01 np0005603541 python3.9[107387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 01:54:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 01:54:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 01:54:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:02 np0005603541 python3.9[107541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:02 np0005603541 podman[107790]: 2026-01-31 06:54:02.685801047 +0000 UTC m=+0.055735766 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:02 np0005603541 podman[107790]: 2026-01-31 06:54:02.782872904 +0000 UTC m=+0.152807613 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 01:54:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:03 np0005603541 python3.9[107900]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 01:54:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:03 np0005603541 podman[108026]: 2026-01-31 06:54:03.331911731 +0000 UTC m=+0.049074263 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:54:03 np0005603541 podman[108026]: 2026-01-31 06:54:03.343990975 +0000 UTC m=+0.061153487 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 01:54:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:03.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:03 np0005603541 podman[108113]: 2026-01-31 06:54:03.528344296 +0000 UTC m=+0.059697890 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, description=keepalived for Ceph, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2)
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:54:03 np0005603541 podman[108134]: 2026-01-31 06:54:03.677829351 +0000 UTC m=+0.135329109 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:54:03 np0005603541 podman[108113]: 2026-01-31 06:54:03.696013573 +0000 UTC m=+0.227367097 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, release=1793, io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:04 np0005603541 python3.9[108320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:04 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 98f132ae-fedf-4276-91c0-6bd22f4f7283 does not exist
Jan 31 01:54:04 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 66b80f11-f361-4f19-be9e-7d3d82f35e23 does not exist
Jan 31 01:54:04 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c2b6f4ab-6869-4f77-b416-4be85182140d does not exist
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:54:04 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:54:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:05 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:54:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:54:05 np0005603541 python3.9[108574]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:05 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.395247307 +0000 UTC m=+0.092499039 container create e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:54:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:05.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:05 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.324173494 +0000 UTC m=+0.021425246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:05 np0005603541 systemd[1]: Started libpod-conmon-e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca.scope.
Jan 31 01:54:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.624639506 +0000 UTC m=+0.321891298 container init e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.631841342 +0000 UTC m=+0.329093064 container start e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:05 np0005603541 pensive_bose[108719]: 167 167
Jan 31 01:54:05 np0005603541 systemd[1]: libpod-e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca.scope: Deactivated successfully.
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.714794613 +0000 UTC m=+0.412046385 container attach e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:54:05 np0005603541 podman[108703]: 2026-01-31 06:54:05.715325798 +0000 UTC m=+0.412577560 container died e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 01:54:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d40444ac4b9880434251d037cedc10a7fa4eab234d1c1cd674a8e26f2f04d005-merged.mount: Deactivated successfully.
Jan 31 01:54:06 np0005603541 podman[108703]: 2026-01-31 06:54:06.081117503 +0000 UTC m=+0.778369245 container remove e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:54:06 np0005603541 systemd[1]: libpod-conmon-e3c49f8e2f1cddc86ff4ec4b00470c346d3ed39c246ac8bd9fe2779e48f3f1ca.scope: Deactivated successfully.
Jan 31 01:54:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:06 np0005603541 podman[108745]: 2026-01-31 06:54:06.215869097 +0000 UTC m=+0.049476294 container create 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 01:54:06 np0005603541 systemd[1]: Started libpod-conmon-05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b.scope.
Jan 31 01:54:06 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:06 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:06 np0005603541 podman[108745]: 2026-01-31 06:54:06.198496757 +0000 UTC m=+0.032103964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:06 np0005603541 podman[108745]: 2026-01-31 06:54:06.296831717 +0000 UTC m=+0.130438944 container init 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 01:54:06 np0005603541 podman[108745]: 2026-01-31 06:54:06.305105531 +0000 UTC m=+0.138712728 container start 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:54:06 np0005603541 podman[108745]: 2026-01-31 06:54:06.308711124 +0000 UTC m=+0.142318321 container attach 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:54:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:06 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 01:54:06 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 01:54:07 np0005603541 flamboyant_wu[108762]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:54:07 np0005603541 flamboyant_wu[108762]: --> relative data size: 1.0
Jan 31 01:54:07 np0005603541 flamboyant_wu[108762]: --> All data devices are unavailable
Jan 31 01:54:07 np0005603541 systemd[1]: libpod-05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b.scope: Deactivated successfully.
Jan 31 01:54:07 np0005603541 podman[108745]: 2026-01-31 06:54:07.073832576 +0000 UTC m=+0.907439753 container died 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fd16cfc465d878a0438ce511886cdbcc192cc1091b01764f6b03a9124ed5b05a-merged.mount: Deactivated successfully.
Jan 31 01:54:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:07 np0005603541 podman[108745]: 2026-01-31 06:54:07.121760618 +0000 UTC m=+0.955367805 container remove 05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 01:54:07 np0005603541 systemd[1]: libpod-conmon-05e8e4eec7cf8a7d22fb383bb17460b620fc15e2491e7e3bde03f8e13b8ec59b.scope: Deactivated successfully.
Jan 31 01:54:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:07 np0005603541 python3.9[108928]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:54:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 01:54:07 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 01:54:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:07.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.663856356 +0000 UTC m=+0.052002230 container create f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:07 np0005603541 systemd[1]: Started libpod-conmon-f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad.scope.
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.638935649 +0000 UTC m=+0.027081593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:07 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.752929326 +0000 UTC m=+0.141075270 container init f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.760662636 +0000 UTC m=+0.148808480 container start f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.764031464 +0000 UTC m=+0.152177398 container attach f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 01:54:07 np0005603541 gracious_blackburn[109137]: 167 167
Jan 31 01:54:07 np0005603541 systemd[1]: libpod-f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad.scope: Deactivated successfully.
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.767170355 +0000 UTC m=+0.155316239 container died f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:54:07 np0005603541 systemd[1]: var-lib-containers-storage-overlay-27bc52096e616235edc75b9115c7f180393a794abb59645ed52a0a88b83b0d2d-merged.mount: Deactivated successfully.
Jan 31 01:54:07 np0005603541 podman[109087]: 2026-01-31 06:54:07.807413119 +0000 UTC m=+0.195558953 container remove f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:54:07 np0005603541 systemd[1]: libpod-conmon-f721008f43b720acc0d1dc5b0c7423b294e9951c62e8005406c5b024cf39dbad.scope: Deactivated successfully.
Jan 31 01:54:07 np0005603541 podman[109256]: 2026-01-31 06:54:07.91348206 +0000 UTC m=+0.038768087 container create 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:54:07 np0005603541 systemd[1]: Started libpod-conmon-3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908.scope.
Jan 31 01:54:07 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986323b6e2a453c104a37eb22de83af8be689dbe8b3388350583ecfc60669490/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986323b6e2a453c104a37eb22de83af8be689dbe8b3388350583ecfc60669490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986323b6e2a453c104a37eb22de83af8be689dbe8b3388350583ecfc60669490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:07 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986323b6e2a453c104a37eb22de83af8be689dbe8b3388350583ecfc60669490/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:07 np0005603541 podman[109256]: 2026-01-31 06:54:07.984575322 +0000 UTC m=+0.109861369 container init 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:54:07 np0005603541 podman[109256]: 2026-01-31 06:54:07.894515527 +0000 UTC m=+0.019801614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:07 np0005603541 podman[109256]: 2026-01-31 06:54:07.990529827 +0000 UTC m=+0.115815854 container start 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:54:07 np0005603541 podman[109256]: 2026-01-31 06:54:07.994817159 +0000 UTC m=+0.120103206 container attach 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:54:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:08 np0005603541 friendly_pare[109272]: {
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:    "0": [
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:        {
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "devices": [
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "/dev/loop3"
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            ],
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "lv_name": "ceph_lv0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "lv_size": "7511998464",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "name": "ceph_lv0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "tags": {
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.cluster_name": "ceph",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.crush_device_class": "",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.encrypted": "0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.osd_id": "0",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.type": "block",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:                "ceph.vdo": "0"
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            },
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "type": "block",
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:            "vg_name": "ceph_vg0"
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:        }
Jan 31 01:54:08 np0005603541 friendly_pare[109272]:    ]
Jan 31 01:54:08 np0005603541 friendly_pare[109272]: }
Jan 31 01:54:08 np0005603541 systemd[1]: libpod-3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908.scope: Deactivated successfully.
Jan 31 01:54:08 np0005603541 podman[109256]: 2026-01-31 06:54:08.74494082 +0000 UTC m=+0.870226857 container died 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:54:08 np0005603541 python3.9[109478]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 01:54:08 np0005603541 systemd[1]: var-lib-containers-storage-overlay-986323b6e2a453c104a37eb22de83af8be689dbe8b3388350583ecfc60669490-merged.mount: Deactivated successfully.
Jan 31 01:54:08 np0005603541 podman[109256]: 2026-01-31 06:54:08.930382269 +0000 UTC m=+1.055668286 container remove 3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:54:08 np0005603541 systemd[1]: libpod-conmon-3767b0bb2ce6d9e58af85337bb40a8ab3d91280e5e9abaeefc1abe80e9567908.scope: Deactivated successfully.
Jan 31 01:54:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:54:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:09.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:54:09 np0005603541 python3.9[109746]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.461570103 +0000 UTC m=+0.044125684 container create dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 01:54:09 np0005603541 systemd[1]: Started libpod-conmon-dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff.scope.
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.441258737 +0000 UTC m=+0.023814318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.555688794 +0000 UTC m=+0.138244385 container init dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.563671491 +0000 UTC m=+0.146227042 container start dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:54:09 np0005603541 suspicious_roentgen[109812]: 167 167
Jan 31 01:54:09 np0005603541 systemd[1]: libpod-dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff.scope: Deactivated successfully.
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.567798068 +0000 UTC m=+0.150353659 container attach dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.568086775 +0000 UTC m=+0.150642326 container died dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:54:09 np0005603541 systemd[1]: var-lib-containers-storage-overlay-115a81fd52ec93dfe826db432930c64e3f3aaa9e4007c71c93b3b318de54436b-merged.mount: Deactivated successfully.
Jan 31 01:54:09 np0005603541 podman[109788]: 2026-01-31 06:54:09.606548973 +0000 UTC m=+0.189104514 container remove dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:54:09 np0005603541 systemd[1]: libpod-conmon-dc3af367cbcb9020f243b7e4292fb3ba722e5612e91f8f5ebe78c967cbf612ff.scope: Deactivated successfully.
Jan 31 01:54:09 np0005603541 podman[109881]: 2026-01-31 06:54:09.735978499 +0000 UTC m=+0.038671383 container create e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:54:09 np0005603541 systemd[1]: Started libpod-conmon-e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c.scope.
Jan 31 01:54:09 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:54:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e631b9e67d83983056341c1300347dac9550bc52648f55b9268862c85685d8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e631b9e67d83983056341c1300347dac9550bc52648f55b9268862c85685d8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e631b9e67d83983056341c1300347dac9550bc52648f55b9268862c85685d8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:09 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e631b9e67d83983056341c1300347dac9550bc52648f55b9268862c85685d8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:54:09 np0005603541 podman[109881]: 2026-01-31 06:54:09.718773423 +0000 UTC m=+0.021466327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:54:09 np0005603541 podman[109881]: 2026-01-31 06:54:09.82777524 +0000 UTC m=+0.130468134 container init e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 31 01:54:09 np0005603541 podman[109881]: 2026-01-31 06:54:09.832723628 +0000 UTC m=+0.135416512 container start e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 01:54:09 np0005603541 podman[109881]: 2026-01-31 06:54:09.841676321 +0000 UTC m=+0.144369215 container attach e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:54:10 np0005603541 python3.9[110004]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]: {
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:        "osd_id": 0,
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:        "type": "bluestore"
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]:    }
Jan 31 01:54:10 np0005603541 inspiring_wing[109932]: }
Jan 31 01:54:10 np0005603541 systemd[1]: libpod-e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c.scope: Deactivated successfully.
Jan 31 01:54:10 np0005603541 podman[109881]: 2026-01-31 06:54:10.662138477 +0000 UTC m=+0.964831411 container died e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:54:10 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5e631b9e67d83983056341c1300347dac9550bc52648f55b9268862c85685d8d-merged.mount: Deactivated successfully.
Jan 31 01:54:10 np0005603541 podman[109881]: 2026-01-31 06:54:10.716248779 +0000 UTC m=+1.018941683 container remove e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:54:10 np0005603541 systemd[1]: libpod-conmon-e359442f165608604de436291bb9a63a62a99a7178eee138fdce2b8bd23b774c.scope: Deactivated successfully.
Jan 31 01:54:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:54:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:54:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 6099d7a9-bae4-441f-a536-d6adaca77e37 does not exist
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 2f39d65b-e005-432a-b047-3a85681a74c8 does not exist
Jan 31 01:54:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 33790dbc-a27b-43bc-a6ab-386e8fcad600 does not exist
Jan 31 01:54:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:11 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:54:11 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 01:54:11 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 01:54:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:54:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:11.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:54:12 np0005603541 python3.9[110235]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:13.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Jan 31 01:54:14 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Jan 31 01:54:14 np0005603541 python3.9[110389]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:54:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:15 np0005603541 python3.9[110543]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 01:54:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:15.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:16 np0005603541 systemd-logind[817]: Session 36 logged out. Waiting for processes to exit.
Jan 31 01:54:16 np0005603541 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 01:54:16 np0005603541 systemd[1]: session-36.scope: Consumed 16.012s CPU time.
Jan 31 01:54:16 np0005603541 systemd-logind[817]: Removed session 36.
Jan 31 01:54:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:54:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:54:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:17.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:54:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:19.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:54:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:21.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:21 np0005603541 systemd-logind[817]: New session 37 of user zuul.
Jan 31 01:54:21 np0005603541 systemd[1]: Started Session 37 of User zuul.
Jan 31 01:54:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:22 np0005603541 python3.9[110725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:23.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:23.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:23 np0005603541 python3.9[110879]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:54:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:24 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 01:54:24 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 01:54:24 np0005603541 python3.9[111073]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:54:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:25 np0005603541 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 01:54:25 np0005603541 systemd[1]: session-37.scope: Consumed 2.051s CPU time.
Jan 31 01:54:25 np0005603541 systemd-logind[817]: Session 37 logged out. Waiting for processes to exit.
Jan 31 01:54:25 np0005603541 systemd-logind[817]: Removed session 37.
Jan 31 01:54:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:25.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 01:54:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:27.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 01:54:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:27.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 01:54:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:29.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 01:54:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:29.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:30 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 01:54:30 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 01:54:30 np0005603541 systemd-logind[817]: New session 38 of user zuul.
Jan 31 01:54:30 np0005603541 systemd[1]: Started Session 38 of User zuul.
Jan 31 01:54:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:31 np0005603541 python3.9[111305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Jan 31 01:54:32 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Jan 31 01:54:32 np0005603541 python3.9[111460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:33.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:33.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:33 np0005603541 python3.9[111616]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:54:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:34 np0005603541 python3.9[111701]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:35 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 01:54:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:35 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 01:54:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:35.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:36 np0005603541 python3.9[111855]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:54:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:37.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:54:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:54:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:37 np0005603541 python3.9[112051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:54:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:38 np0005603541 python3.9[112204]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:54:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:39 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Jan 31 01:54:39 np0005603541 python3.9[112367]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:54:39 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Jan 31 01:54:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:39 np0005603541 python3.9[112445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:54:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 01:54:40 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 01:54:40 np0005603541 python3.9[112598]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:54:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:40 np0005603541 python3.9[112676]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:54:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:54:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:41.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:54:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:41.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:41 np0005603541 python3.9[112828]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:54:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:42 np0005603541 python3.9[112981]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:54:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:42 np0005603541 python3.9[113133]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:54:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:43 np0005603541 python3.9[113285]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:54:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:43.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:43.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:44 np0005603541 python3.9[113438]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:54:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:45.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:54:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:54:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:54:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:46 np0005603541 python3.9[113592]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:54:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:47 np0005603541 python3.9[113746]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:54:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:47.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:47 np0005603541 python3.9[113898]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:54:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:54:48 np0005603541 python3.9[114051]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:54:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:54:49
Jan 31 01:54:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:54:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:54:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', 'images', 'backups', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.control']
Jan 31 01:54:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:54:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:49.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:49.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:49 np0005603541 python3.9[114254]: ansible-service_facts Invoked
Jan 31 01:54:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:50 np0005603541 network[114272]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 01:54:50 np0005603541 network[114273]: 'network-scripts' will be removed from distribution in near future.
Jan 31 01:54:50 np0005603541 network[114274]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 01:54:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:51.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:51.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:53.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:53.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:54:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:54:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:54 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:55.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:55 np0005603541 python3.9[114728]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:54:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:57.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:54:58 np0005603541 python3.9[114883]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 01:54:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:54:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:54:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:54:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:59.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:54:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:54:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:54:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:59.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:54:59 np0005603541 python3.9[115036]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:54:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:54:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:00 np0005603541 python3.9[115114]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:01 np0005603541 python3.9[115266]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:01.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:01 np0005603541 python3.9[115344]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:03 np0005603541 python3.9[115497]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:55:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:55:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:04 np0005603541 python3.9[115650]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:55:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:05 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:05.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:05 np0005603541 python3.9[115734]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:55:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:06 np0005603541 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 01:55:06 np0005603541 systemd[1]: session-38.scope: Consumed 20.912s CPU time.
Jan 31 01:55:06 np0005603541 systemd-logind[817]: Session 38 logged out. Waiting for processes to exit.
Jan 31 01:55:06 np0005603541 systemd-logind[817]: Removed session 38.
Jan 31 01:55:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 01:55:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:07.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 01:55:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:09.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:55:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:11.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:11.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:11 np0005603541 systemd-logind[817]: New session 39 of user zuul.
Jan 31 01:55:11 np0005603541 systemd[1]: Started Session 39 of User zuul.
Jan 31 01:55:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:12 np0005603541 python3.9[116102]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:55:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:55:12 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:13.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 766e7499-ce6f-4d1a-9842-2c77a45b2bc5 does not exist
Jan 31 01:55:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev e49a9f9c-54fc-4ef3-a6c4-616fcc248ea4 does not exist
Jan 31 01:55:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 463d720f-318a-4d3c-8569-bad5981519a3 does not exist
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:55:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:55:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:13 np0005603541 python3.9[116254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:13 np0005603541 podman[116473]: 2026-01-31 06:55:13.977363934 +0000 UTC m=+0.048013301 container create ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:55:14 np0005603541 python3.9[116451]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:14 np0005603541 systemd[1]: Started libpod-conmon-ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a.scope.
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:13.954862181 +0000 UTC m=+0.025511528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:14 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:14.068639227 +0000 UTC m=+0.139288614 container init ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:14.074586424 +0000 UTC m=+0.145235751 container start ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:55:14 np0005603541 confident_mclaren[116490]: 167 167
Jan 31 01:55:14 np0005603541 systemd[1]: libpod-ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a.scope: Deactivated successfully.
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:14.08299981 +0000 UTC m=+0.153649147 container attach ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:14.083356469 +0000 UTC m=+0.154005806 container died ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 01:55:14 np0005603541 systemd[1]: var-lib-containers-storage-overlay-24fa1a4ec15ad826d83eb452441d91efe82b13b0e3a4c73204921d1fa0c72f8d-merged.mount: Deactivated successfully.
Jan 31 01:55:14 np0005603541 podman[116473]: 2026-01-31 06:55:14.141425455 +0000 UTC m=+0.212074782 container remove ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 01:55:14 np0005603541 systemd[1]: libpod-conmon-ef0df6b386827fd8ba4b8b68a334126a11543ccf3ace5297e650a4d5bc68410a.scope: Deactivated successfully.
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:14 np0005603541 podman[116540]: 2026-01-31 06:55:14.302214337 +0000 UTC m=+0.055326741 container create 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 01:55:14 np0005603541 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 01:55:14 np0005603541 systemd[1]: session-39.scope: Consumed 1.410s CPU time.
Jan 31 01:55:14 np0005603541 systemd-logind[817]: Session 39 logged out. Waiting for processes to exit.
Jan 31 01:55:14 np0005603541 systemd-logind[817]: Removed session 39.
Jan 31 01:55:14 np0005603541 systemd[1]: Started libpod-conmon-6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36.scope.
Jan 31 01:55:14 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:14 np0005603541 podman[116540]: 2026-01-31 06:55:14.276943356 +0000 UTC m=+0.030055800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:14 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:14 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:14 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:14 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:14 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:14 np0005603541 podman[116540]: 2026-01-31 06:55:14.388869726 +0000 UTC m=+0.141982130 container init 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:55:14 np0005603541 podman[116540]: 2026-01-31 06:55:14.395564751 +0000 UTC m=+0.148677155 container start 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 01:55:14 np0005603541 podman[116540]: 2026-01-31 06:55:14.400650366 +0000 UTC m=+0.153762790 container attach 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:55:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:15 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:15 np0005603541 suspicious_lamarr[116558]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:55:15 np0005603541 suspicious_lamarr[116558]: --> relative data size: 1.0
Jan 31 01:55:15 np0005603541 suspicious_lamarr[116558]: --> All data devices are unavailable
Jan 31 01:55:15 np0005603541 systemd[1]: libpod-6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36.scope: Deactivated successfully.
Jan 31 01:55:15 np0005603541 podman[116540]: 2026-01-31 06:55:15.151412994 +0000 UTC m=+0.904525398 container died 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:55:15 np0005603541 systemd[1]: var-lib-containers-storage-overlay-58f88ef3edffb1f3fb22ae6c5d2527882f802292d92c6257035ccf331961166c-merged.mount: Deactivated successfully.
Jan 31 01:55:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 01:55:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:15.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 01:55:15 np0005603541 podman[116540]: 2026-01-31 06:55:15.245158908 +0000 UTC m=+0.998271312 container remove 6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:55:15 np0005603541 systemd[1]: libpod-conmon-6d7818fe635a5f5721d284abe38bce58dabcfff3182c2ed68d1b9b67f4a97c36.scope: Deactivated successfully.
Jan 31 01:55:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:15.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.809976077 +0000 UTC m=+0.098074351 container create eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.734117463 +0000 UTC m=+0.022215747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:15 np0005603541 systemd[1]: Started libpod-conmon-eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94.scope.
Jan 31 01:55:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.957408559 +0000 UTC m=+0.245506833 container init eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.966999255 +0000 UTC m=+0.255097529 container start eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.971226909 +0000 UTC m=+0.259325183 container attach eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 01:55:15 np0005603541 dreamy_chandrasekhar[116743]: 167 167
Jan 31 01:55:15 np0005603541 systemd[1]: libpod-eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94.scope: Deactivated successfully.
Jan 31 01:55:15 np0005603541 podman[116727]: 2026-01-31 06:55:15.974215352 +0000 UTC m=+0.262313646 container died eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:55:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ba151cce2878bc03db871dbd495c6a6a6f08bc21234d55745e4a3fee33841a51-merged.mount: Deactivated successfully.
Jan 31 01:55:16 np0005603541 podman[116727]: 2026-01-31 06:55:16.093395912 +0000 UTC m=+0.381494196 container remove eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:55:16 np0005603541 systemd[1]: libpod-conmon-eacd0008fe6aafd0217f22ad2307d90d8afbc5fc324e24d94e2c0950a6da5f94.scope: Deactivated successfully.
Jan 31 01:55:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:16 np0005603541 podman[116766]: 2026-01-31 06:55:16.245465438 +0000 UTC m=+0.060917607 container create 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 01:55:16 np0005603541 systemd[1]: Started libpod-conmon-8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2.scope.
Jan 31 01:55:16 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:16 np0005603541 podman[116766]: 2026-01-31 06:55:16.20322329 +0000 UTC m=+0.018675499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aa9fc5e9957dc017f995f061ea480307271644c359e897d3f1871510502745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aa9fc5e9957dc017f995f061ea480307271644c359e897d3f1871510502745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aa9fc5e9957dc017f995f061ea480307271644c359e897d3f1871510502745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aa9fc5e9957dc017f995f061ea480307271644c359e897d3f1871510502745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:16 np0005603541 podman[116766]: 2026-01-31 06:55:16.330120979 +0000 UTC m=+0.145573158 container init 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:55:16 np0005603541 podman[116766]: 2026-01-31 06:55:16.336463584 +0000 UTC m=+0.151915703 container start 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 01:55:16 np0005603541 podman[116766]: 2026-01-31 06:55:16.342666527 +0000 UTC m=+0.158118676 container attach 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:55:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]: {
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:    "0": [
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:        {
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "devices": [
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "/dev/loop3"
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            ],
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "lv_name": "ceph_lv0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "lv_size": "7511998464",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "name": "ceph_lv0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "tags": {
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.cluster_name": "ceph",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.crush_device_class": "",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.encrypted": "0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.osd_id": "0",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.type": "block",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:                "ceph.vdo": "0"
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            },
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "type": "block",
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:            "vg_name": "ceph_vg0"
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:        }
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]:    ]
Jan 31 01:55:17 np0005603541 clever_hodgkin[116782]: }
Jan 31 01:55:17 np0005603541 systemd[1]: libpod-8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2.scope: Deactivated successfully.
Jan 31 01:55:17 np0005603541 podman[116766]: 2026-01-31 06:55:17.095460525 +0000 UTC m=+0.910912654 container died 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 01:55:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:17 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a0aa9fc5e9957dc017f995f061ea480307271644c359e897d3f1871510502745-merged.mount: Deactivated successfully.
Jan 31 01:55:17 np0005603541 podman[116766]: 2026-01-31 06:55:17.185823146 +0000 UTC m=+1.001275275 container remove 8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:55:17 np0005603541 systemd[1]: libpod-conmon-8033eb57b5024b698751f4c25ce8021ed86747999f5d9a396cc394b636d9afd2.scope: Deactivated successfully.
Jan 31 01:55:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:17.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:17.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.747715063 +0000 UTC m=+0.073656941 container create 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.694837863 +0000 UTC m=+0.020779721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:17 np0005603541 systemd[1]: Started libpod-conmon-4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42.scope.
Jan 31 01:55:17 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.87048208 +0000 UTC m=+0.196423928 container init 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.876331304 +0000 UTC m=+0.202273152 container start 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:55:17 np0005603541 vigorous_ishizaka[116961]: 167 167
Jan 31 01:55:17 np0005603541 systemd[1]: libpod-4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42.scope: Deactivated successfully.
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.88024051 +0000 UTC m=+0.206182358 container attach 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 01:55:17 np0005603541 podman[116946]: 2026-01-31 06:55:17.880959137 +0000 UTC m=+0.206900985 container died 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:55:17 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1e34ca3a55f835fbbede7320a6f255c6adb35888f36c8362697af69c90e65292-merged.mount: Deactivated successfully.
Jan 31 01:55:18 np0005603541 podman[116946]: 2026-01-31 06:55:18.008916112 +0000 UTC m=+0.334857950 container remove 4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 01:55:18 np0005603541 systemd[1]: libpod-conmon-4710be848a80cf91295aed6f5aef88146de1706517f9e13e4f9f0ec1f1092b42.scope: Deactivated successfully.
Jan 31 01:55:18 np0005603541 podman[116989]: 2026-01-31 06:55:18.159691426 +0000 UTC m=+0.070587705 container create 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:55:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:18 np0005603541 podman[116989]: 2026-01-31 06:55:18.110907227 +0000 UTC m=+0.021803516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:55:18 np0005603541 systemd[1]: Started libpod-conmon-678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520.scope.
Jan 31 01:55:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:55:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05e9a1bd2c7475ce446c834f8dae24aa1441257fd0e8d71b47163fa981ba7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05e9a1bd2c7475ce446c834f8dae24aa1441257fd0e8d71b47163fa981ba7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05e9a1bd2c7475ce446c834f8dae24aa1441257fd0e8d71b47163fa981ba7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05e9a1bd2c7475ce446c834f8dae24aa1441257fd0e8d71b47163fa981ba7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:55:18 np0005603541 podman[116989]: 2026-01-31 06:55:18.261855937 +0000 UTC m=+0.172752296 container init 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:55:18 np0005603541 podman[116989]: 2026-01-31 06:55:18.267189488 +0000 UTC m=+0.178085757 container start 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 01:55:18 np0005603541 podman[116989]: 2026-01-31 06:55:18.271658688 +0000 UTC m=+0.182554957 container attach 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:19 np0005603541 serene_villani[117006]: {
Jan 31 01:55:19 np0005603541 serene_villani[117006]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:55:19 np0005603541 serene_villani[117006]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:55:19 np0005603541 serene_villani[117006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:55:19 np0005603541 serene_villani[117006]:        "osd_id": 0,
Jan 31 01:55:19 np0005603541 serene_villani[117006]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:55:19 np0005603541 serene_villani[117006]:        "type": "bluestore"
Jan 31 01:55:19 np0005603541 serene_villani[117006]:    }
Jan 31 01:55:19 np0005603541 serene_villani[117006]: }
Jan 31 01:55:19 np0005603541 systemd[1]: libpod-678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520.scope: Deactivated successfully.
Jan 31 01:55:19 np0005603541 podman[116989]: 2026-01-31 06:55:19.054526335 +0000 UTC m=+0.965422604 container died 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:55:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6a05e9a1bd2c7475ce446c834f8dae24aa1441257fd0e8d71b47163fa981ba7c-merged.mount: Deactivated successfully.
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:19 np0005603541 podman[116989]: 2026-01-31 06:55:19.374662042 +0000 UTC m=+1.285558341 container remove 678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 01:55:19 np0005603541 systemd-logind[817]: New session 40 of user zuul.
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:55:19 np0005603541 systemd[1]: Started Session 40 of User zuul.
Jan 31 01:55:19 np0005603541 systemd[1]: libpod-conmon-678ee6c72d7978c7c4c67bc7b0417e80d3e8dca99aca5d711d64a53e59ad4520.scope: Deactivated successfully.
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:55:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:19 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:19 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 674088ad-c180-41e6-ae20-f092ddef272f does not exist
Jan 31 01:55:19 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev cbc4acc9-d327-4bb4-86b4-a6b84252ddb7 does not exist
Jan 31 01:55:19 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 81ad9808-8eaf-4dc3-953c-adc40fb05e24 does not exist
Jan 31 01:55:20 np0005603541 python3.9[117243]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:55:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:20 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:20 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:55:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:21 np0005603541 python3.9[117399]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:22 np0005603541 python3.9[117575]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:22 np0005603541 python3.9[117653]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.kywb9_fe recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:23.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:23 np0005603541 python3.9[117805]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:24 np0005603541 python3.9[117884]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.y7cu6g9j recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:24 np0005603541 python3.9[118036]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:55:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:25.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:25 np0005603541 python3.9[118188]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:25 np0005603541 python3.9[118267]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:55:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:26 np0005603541 python3.9[118419]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:27 np0005603541 python3.9[118497]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:55:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:27 np0005603541 python3.9[118649]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:29.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:29 np0005603541 python3.9[118852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:30 np0005603541 python3.9[118931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:30 np0005603541 python3.9[119083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:55:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:31.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:55:31 np0005603541 python3.9[119161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:55:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:55:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:32 np0005603541 python3.9[119314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:55:32 np0005603541 systemd[1]: Reloading.
Jan 31 01:55:32 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:55:32 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:55:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:33.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:33 np0005603541 python3.9[119503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:33 np0005603541 python3.9[119582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:35 np0005603541 python3.9[119734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:35.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:35 np0005603541 python3.9[119812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:36 np0005603541 python3.9[119965]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:55:36 np0005603541 systemd[1]: Reloading.
Jan 31 01:55:36 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:55:36 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:55:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:36 np0005603541 systemd[1]: Starting Create netns directory...
Jan 31 01:55:36 np0005603541 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 01:55:36 np0005603541 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 01:55:36 np0005603541 systemd[1]: Finished Create netns directory.
Jan 31 01:55:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:55:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:55:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:41.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:55:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:43.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:55:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:43 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 01:55:43 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 31 01:55:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:55:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:45 np0005603541 python3.9[120161]: ansible-ansible.builtin.service_facts Invoked
Jan 31 01:55:45 np0005603541 network[120178]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 01:55:45 np0005603541 network[120179]: 'network-scripts' will be removed from distribution in near future.
Jan 31 01:55:45 np0005603541 network[120180]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 01:55:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:45.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:55:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:55:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:55:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:55:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:48 np0005603541 python3.9[120444]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:55:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:55:48 np0005603541 python3.9[120522]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:48 np0005603541 ceph-mds[93426]: mds.beacon.cephfs.compute-0.kanoes missed beacon ack from the monitors
Jan 31 01:55:48 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 01:55:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:55:49
Jan 31 01:55:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:55:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:55:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images']
Jan 31 01:55:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.gghdjs(active, since 8m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(18) init, last seen epoch 18
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: paxos.0).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.gghdjs(active, since 8m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 01:55:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:49.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:49 np0005603541 python3.9[120724]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:50 np0005603541 python3.9[120877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-1 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-2 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-1 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-2 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-0 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-0 calling monitor election
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 01:55:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:50 np0005603541 python3.9[120955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:51 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:51 np0005603541 python3.9[121108]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 01:55:51 np0005603541 systemd[1]: Starting Time & Date Service...
Jan 31 01:55:52 np0005603541 systemd[1]: Started Time & Date Service.
Jan 31 01:55:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:52 np0005603541 python3.9[121264]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:55:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:55:53 np0005603541 python3.9[121416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:53 np0005603541 python3.9[121495]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:55:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:55:54 np0005603541 python3.9[121647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:55 np0005603541 python3.9[121725]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5k6_3n6e recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:55:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:55:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:55 np0005603541 python3.9[121877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:55.891787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842555891827, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2556, "num_deletes": 251, "total_data_size": 3481686, "memory_usage": 3546016, "flush_reason": "Manual Compaction"}
Jan 31 01:55:55 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556002828, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3370565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7729, "largest_seqno": 10284, "table_properties": {"data_size": 3359717, "index_size": 6254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 31713, "raw_average_key_size": 22, "raw_value_size": 3334012, "raw_average_value_size": 2374, "num_data_blocks": 277, "num_entries": 1404, "num_filter_entries": 1404, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842377, "oldest_key_time": 1769842377, "file_creation_time": 1769842555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 111354 microseconds, and 7807 cpu microseconds.
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.003141) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3370565 bytes OK
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.003285) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.024369) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.024411) EVENT_LOG_v1 {"time_micros": 1769842556024401, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.024433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3470143, prev total WAL file size 3470143, number of live WAL files 2.
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.026557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3291KB)], [20(7788KB)]
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556026836, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11346296, "oldest_snapshot_seqno": -1}
Jan 31 01:55:56 np0005603541 python3.9[121956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 4186 keys, 9698275 bytes, temperature: kUnknown
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556183083, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9698275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9665049, "index_size": 21660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102877, "raw_average_key_size": 24, "raw_value_size": 9584043, "raw_average_value_size": 2289, "num_data_blocks": 945, "num_entries": 4186, "num_filter_entries": 4186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769842556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.183456) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9698275 bytes
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.197485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.6 rd, 62.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 4713, records dropped: 527 output_compression: NoCompression
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.197555) EVENT_LOG_v1 {"time_micros": 1769842556197532, "job": 6, "event": "compaction_finished", "compaction_time_micros": 156363, "compaction_time_cpu_micros": 17075, "output_level": 6, "num_output_files": 1, "total_output_size": 9698275, "num_input_records": 4713, "num_output_records": 4186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556198269, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556199615, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.026234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.199681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.199689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.199690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.199692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:55:56.199694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:55:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:57 np0005603541 python3.9[122108]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:55:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:55:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:55:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:57 np0005603541 python3[122262]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 01:55:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:55:58 np0005603541 python3.9[122414]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:55:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:59.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:55:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:55:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:55:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:55:59 np0005603541 python3.9[122492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:55:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:55:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:55:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:01.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:56:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:01.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:56:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:03.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:05 np0005603541 python3.9[122647]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:05.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:05 np0005603541 python3.9[122773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842564.589464-899-16553663116182/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:06 np0005603541 python3.9[122925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:07 np0005603541 python3.9[123003]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:07.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:08 np0005603541 python3.9[123156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:08 np0005603541 python3.9[123234]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:09.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:09 np0005603541 python3.9[123434]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:09 np0005603541 python3.9[123515]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:56:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:10 np0005603541 python3.9[123667]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:11.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:11 np0005603541 python3.9[123822]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:12 np0005603541 python3.9[123975]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:12 np0005603541 python3.9[124127]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:13.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:13 np0005603541 python3.9[124279]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 01:56:14 np0005603541 python3.9[124432]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 01:56:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:15 np0005603541 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 01:56:15 np0005603541 systemd[1]: session-40.scope: Consumed 25.564s CPU time.
Jan 31 01:56:15 np0005603541 systemd-logind[817]: Session 40 logged out. Waiting for processes to exit.
Jan 31 01:56:15 np0005603541 systemd-logind[817]: Removed session 40.
Jan 31 01:56:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:15 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:15.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:56:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:56:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:17.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:19.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:56:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:21.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:56:22 np0005603541 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:22 np0005603541 systemd-logind[817]: New session 41 of user zuul.
Jan 31 01:56:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 10df38aa-a27a-413c-8f8a-ba910dffc579 does not exist
Jan 31 01:56:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 8284552b-a190-42a5-8261-7d8bbe1620aa does not exist
Jan 31 01:56:22 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 108bbad1-e4bb-4bf5-94a3-df863154cc97 does not exist
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:56:22 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:56:22 np0005603541 systemd[1]: Started Session 41 of User zuul.
Jan 31 01:56:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.616455764 +0000 UTC m=+0.064716867 container create 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.575746758 +0000 UTC m=+0.024007891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:22 np0005603541 systemd[1]: Started libpod-conmon-9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e.scope.
Jan 31 01:56:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.717325383 +0000 UTC m=+0.165586496 container init 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.72562573 +0000 UTC m=+0.173886813 container start 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 01:56:22 np0005603541 systemd[1]: libpod-9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e.scope: Deactivated successfully.
Jan 31 01:56:22 np0005603541 silly_hofstadter[124905]: 167 167
Jan 31 01:56:22 np0005603541 conmon[124905]: conmon 9ab0f91d398110d775a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e.scope/container/memory.events
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.73985922 +0000 UTC m=+0.188120303 container attach 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.741262721 +0000 UTC m=+0.189523814 container died 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:22 np0005603541 systemd[1]: var-lib-containers-storage-overlay-aaeeb70ce8aa2034e8d555c4ec2198763e88fdbfcbd0f8f6a6b482c496fd4b00-merged.mount: Deactivated successfully.
Jan 31 01:56:22 np0005603541 podman[124836]: 2026-01-31 06:56:22.83537754 +0000 UTC m=+0.283638643 container remove 9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:56:22 np0005603541 systemd[1]: libpod-conmon-9ab0f91d398110d775a3d3961750a695f919ef4d88ed0ba13e9082a0d1f4595e.scope: Deactivated successfully.
Jan 31 01:56:22 np0005603541 python3.9[124907]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 01:56:22 np0005603541 podman[124930]: 2026-01-31 06:56:22.957892806 +0000 UTC m=+0.052541823 container create 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:56:23 np0005603541 systemd[1]: Started libpod-conmon-7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0.scope.
Jan 31 01:56:23 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:56:23 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:23 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:56:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:22.925257882 +0000 UTC m=+0.019906909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:23 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:23 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:23.066381587 +0000 UTC m=+0.161030614 container init 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:23.072425702 +0000 UTC m=+0.167074709 container start 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:23.087271257 +0000 UTC m=+0.181920284 container attach 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:56:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:23.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:23 np0005603541 python3.9[125103]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:56:23 np0005603541 affectionate_curran[124971]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:56:23 np0005603541 affectionate_curran[124971]: --> relative data size: 1.0
Jan 31 01:56:23 np0005603541 affectionate_curran[124971]: --> All data devices are unavailable
Jan 31 01:56:23 np0005603541 systemd[1]: libpod-7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0.scope: Deactivated successfully.
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:23.85813624 +0000 UTC m=+0.952785257 container died 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 01:56:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-50b1cf4bd4c80ef4afcb70a23c87251eb153c3d912620290e0057423229d39de-merged.mount: Deactivated successfully.
Jan 31 01:56:23 np0005603541 podman[124930]: 2026-01-31 06:56:23.944026632 +0000 UTC m=+1.038675649 container remove 7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curran, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 01:56:23 np0005603541 systemd[1]: libpod-conmon-7ef970b303e5e9f481265755e9250905a8e3b02cffb2b0d119bd1bf53023aae0.scope: Deactivated successfully.
Jan 31 01:56:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.457679178 +0000 UTC m=+0.045863682 container create f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:56:24 np0005603541 systemd[1]: Started libpod-conmon-f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981.scope.
Jan 31 01:56:24 np0005603541 python3.9[125392]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 01:56:24 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.432879211 +0000 UTC m=+0.021063725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.544457311 +0000 UTC m=+0.132641805 container init f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.550842455 +0000 UTC m=+0.139026919 container start f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 31 01:56:24 np0005603541 epic_leavitt[125435]: 167 167
Jan 31 01:56:24 np0005603541 systemd[1]: libpod-f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981.scope: Deactivated successfully.
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.558846265 +0000 UTC m=+0.147030729 container attach f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.559409367 +0000 UTC m=+0.147593841 container died f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:24 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fb187493e7681687d50c892e550f9ccd515e672bae1dced1e4947ead4009ae80-merged.mount: Deactivated successfully.
Jan 31 01:56:24 np0005603541 podman[125418]: 2026-01-31 06:56:24.640336738 +0000 UTC m=+0.228521202 container remove f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 01:56:24 np0005603541 systemd[1]: libpod-conmon-f64677c96f8e464fc4deeff972207faf0034f254f2ac2aa8d7e59caf9065f981.scope: Deactivated successfully.
Jan 31 01:56:24 np0005603541 podman[125506]: 2026-01-31 06:56:24.770523137 +0000 UTC m=+0.043214773 container create c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 01:56:24 np0005603541 systemd[1]: Started libpod-conmon-c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd.scope.
Jan 31 01:56:24 np0005603541 podman[125506]: 2026-01-31 06:56:24.750092627 +0000 UTC m=+0.022784283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:24 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2205d512562a3c27f339e412fcdf6109c5c61ad4da37fa968a17133c51d180/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2205d512562a3c27f339e412fcdf6109c5c61ad4da37fa968a17133c51d180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2205d512562a3c27f339e412fcdf6109c5c61ad4da37fa968a17133c51d180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2205d512562a3c27f339e412fcdf6109c5c61ad4da37fa968a17133c51d180/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:24 np0005603541 podman[125506]: 2026-01-31 06:56:24.89554072 +0000 UTC m=+0.168232406 container init c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:56:24 np0005603541 podman[125506]: 2026-01-31 06:56:24.902174559 +0000 UTC m=+0.174866175 container start c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:56:24 np0005603541 podman[125506]: 2026-01-31 06:56:24.909888542 +0000 UTC m=+0.182580168 container attach c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 01:56:25 np0005603541 python3.9[125632]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.phiqi8um follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:25 np0005603541 epic_turing[125573]: {
Jan 31 01:56:25 np0005603541 epic_turing[125573]:    "0": [
Jan 31 01:56:25 np0005603541 epic_turing[125573]:        {
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "devices": [
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "/dev/loop3"
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            ],
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "lv_name": "ceph_lv0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "lv_size": "7511998464",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "name": "ceph_lv0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "tags": {
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.cluster_name": "ceph",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.crush_device_class": "",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.encrypted": "0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.osd_id": "0",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.type": "block",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:                "ceph.vdo": "0"
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            },
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "type": "block",
Jan 31 01:56:25 np0005603541 epic_turing[125573]:            "vg_name": "ceph_vg0"
Jan 31 01:56:25 np0005603541 epic_turing[125573]:        }
Jan 31 01:56:25 np0005603541 epic_turing[125573]:    ]
Jan 31 01:56:25 np0005603541 epic_turing[125573]: }
Jan 31 01:56:25 np0005603541 systemd[1]: libpod-c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd.scope: Deactivated successfully.
Jan 31 01:56:25 np0005603541 podman[125506]: 2026-01-31 06:56:25.651935177 +0000 UTC m=+0.924626763 container died c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 01:56:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4b2205d512562a3c27f339e412fcdf6109c5c61ad4da37fa968a17133c51d180-merged.mount: Deactivated successfully.
Jan 31 01:56:25 np0005603541 podman[125506]: 2026-01-31 06:56:25.753492452 +0000 UTC m=+1.026184048 container remove c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 01:56:25 np0005603541 systemd[1]: libpod-conmon-c6d867a90d8cae4d83c5a2c4dc7fe686dba33d6a5e487ecc337162e89aa0a5dd.scope: Deactivated successfully.
Jan 31 01:56:26 np0005603541 python3.9[125823]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.phiqi8um mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842584.7045872-107-121250161443799/.source.phiqi8um _original_basename=.wdkt60yd follow=False checksum=fe1ebeeefeefbe8ea03479c835e7fd7974336244 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.247011175 +0000 UTC m=+0.040185374 container create e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:26 np0005603541 systemd[1]: Started libpod-conmon-e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd.scope.
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.226038633 +0000 UTC m=+0.019212872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:26 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.344973339 +0000 UTC m=+0.138147578 container init e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.352939639 +0000 UTC m=+0.146113848 container start e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:56:26 np0005603541 boring_hermann[125997]: 167 167
Jan 31 01:56:26 np0005603541 systemd[1]: libpod-e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd.scope: Deactivated successfully.
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.36325061 +0000 UTC m=+0.156424839 container attach e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.364922268 +0000 UTC m=+0.158096497 container died e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:56:26 np0005603541 systemd[1]: var-lib-containers-storage-overlay-74678d03d9362ecee81573e7a11d421bc61ead3dd474e6e6a54871e6bafbadec-merged.mount: Deactivated successfully.
Jan 31 01:56:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:26 np0005603541 podman[125940]: 2026-01-31 06:56:26.427711241 +0000 UTC m=+0.220885450 container remove e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hermann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:56:26 np0005603541 systemd[1]: libpod-conmon-e8f8021f5d164184e30fa9437e89aee41e053969f43a909d7bfa3609ccdcd4bd.scope: Deactivated successfully.
Jan 31 01:56:26 np0005603541 podman[126031]: 2026-01-31 06:56:26.577371268 +0000 UTC m=+0.050273042 container create 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:56:26 np0005603541 systemd[1]: Started libpod-conmon-8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763.scope.
Jan 31 01:56:26 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:56:26 np0005603541 podman[126031]: 2026-01-31 06:56:26.548159411 +0000 UTC m=+0.021061175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:56:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06df26c49f2c8ec3c1b1d4a73475822f2d501fcda75e797fd336e88237f82b21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06df26c49f2c8ec3c1b1d4a73475822f2d501fcda75e797fd336e88237f82b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06df26c49f2c8ec3c1b1d4a73475822f2d501fcda75e797fd336e88237f82b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06df26c49f2c8ec3c1b1d4a73475822f2d501fcda75e797fd336e88237f82b21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:56:26 np0005603541 podman[126031]: 2026-01-31 06:56:26.66991348 +0000 UTC m=+0.142815274 container init 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:56:26 np0005603541 podman[126031]: 2026-01-31 06:56:26.675421024 +0000 UTC m=+0.148322768 container start 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 01:56:26 np0005603541 podman[126031]: 2026-01-31 06:56:26.683389603 +0000 UTC m=+0.156291367 container attach 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:56:26 np0005603541 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 01:56:26 np0005603541 systemd[1]: session-18.scope: Consumed 1min 8.905s CPU time.
Jan 31 01:56:26 np0005603541 systemd-logind[817]: Session 18 logged out. Waiting for processes to exit.
Jan 31 01:56:26 np0005603541 systemd-logind[817]: Removed session 18.
Jan 31 01:56:27 np0005603541 python3.9[126128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:27.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]: {
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:        "osd_id": 0,
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:        "type": "bluestore"
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]:    }
Jan 31 01:56:27 np0005603541 nifty_maxwell[126048]: }
Jan 31 01:56:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:27 np0005603541 systemd[1]: libpod-8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763.scope: Deactivated successfully.
Jan 31 01:56:27 np0005603541 podman[126031]: 2026-01-31 06:56:27.619068464 +0000 UTC m=+1.091970208 container died 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 01:56:27 np0005603541 systemd[1]: var-lib-containers-storage-overlay-06df26c49f2c8ec3c1b1d4a73475822f2d501fcda75e797fd336e88237f82b21-merged.mount: Deactivated successfully.
Jan 31 01:56:27 np0005603541 podman[126031]: 2026-01-31 06:56:27.685380387 +0000 UTC m=+1.158282131 container remove 8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:56:27 np0005603541 systemd[1]: libpod-conmon-8a1f415fd0bd58237038d90a27415bfdcc2b0051888231eb4c972db2f258a763.scope: Deactivated successfully.
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:56:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev bad93dbc-14b4-4c23-8deb-d84170fbd754 does not exist
Jan 31 01:56:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev a19a890c-78ed-4aaf-a619-babca83c365c does not exist
Jan 31 01:56:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 46092619-1e07-43e1-9e3b-cefa82c82fc9 does not exist
Jan 31 01:56:28 np0005603541 python3.9[126360]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCux/eS/9tJWdvcz7CSqzbT3/CFFfMIoClo+OiLmW4DHDCsL7b4Sd8s4ZGetrM/b9d+nZhH3I0np2S0wkbf0kzxDpFnzV/CqSLPcHC1GFG8DlXIWkbbK3H9Nc+il8eG2rceqOXs5LCS6H6lOeSAynOJd7kkW0euL4YtQcqH6/PCpvaHnyAXOL9+76w6apGzrWBRGSKGvwJiCrundYhP4TjMSlb6ITyIdF0bE1617p7zZOh+CQt6wB17bBAKL/ZR7qQsjbIhW1zwJ7R0NuWJrgxemGImJ3YRN+2WJ5UpNJxoMPkwC67IfW4avOTykueyK9cACQ/OLPMvhxBVzsBBfmV7Xl5RquVXDj1OrXfG+zVu5YV0+GEtmxZhptXdzBvMkDBAr3hRB/jE/GZeCx/d6eoA3vfyT7tFrBaunMaiIutt/GbmQBhPSqSrqgau7M8rqs7ocyOCZI3ezwskVMxOX8yCOVAib7rHUkj+I+B48V/7MXiHOkBpOBUmgGSiM2whUe8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiG2htD5mCqa+IIAJsjOKgNJpPNmrlfh2g7QGI6KcQd#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG8QHiFr+d3LEQcNktaGAAZTvvRlNt/N3ZuLInnbRWqbA8w9jqUbMmg6m0Yc2Z+a+4iHrAMgRl5PGiHtvzbSe78=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVmjyOgMrBcNkKRe/3MkTqg/LhVt3sOvBD2IwLvjJmLe3cxmmFlu3iixT4LIzRscHQxUt6EqOuAiYL2BapPTTPjEaB+TseppBVXIPZfjllMgVy8pSqsZa+MUsbI4pONfcoart2REu5ObJIPOSl3YDAkGB+rxeAE1BD+sYmdlKriC/2JkUcS6p03QSjQnukMP476+uzXmPHLvm7A9TJjN2Oa4FkgJFI8+gFZaKPpHzCdoYD8COI0LYpp49uJ0gHQ7E4AepcpNUZXBgEsYKntsF9J/md1b13dW0ucGniV3eVxfWAH3xMRlwfFrT8TB+iQ74ghNmDEY/CCpZwkpL4W6bV7GT4+3nbvWIJv9/dgPSqeunTbbAWPEu6KM0nOuOGVRtQ6+q4aM3TRwV0DUvZptSGhRnHOekdOBRtiuMOnClub09PJMyOr4fKi3e59CfIx36NjxbNZfwA1j9jS3BDHL5BtATwiuTVMUWtdRYUT0h4zdmDtHkVnnPQBm2C3d7o/8c=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHThs9i/0cwyfrem5xVfEov0dwlVT7YQsUAzvhlKxVcU#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPv7c3x32Z77V8zjbPteGtuwIl3HzfI8HP5le/fNUtef+zMbIe6oyaIlzMLTKYnfaTTkKeVwM+hyTawD64NkAc=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7oaNruBF82m85jI32p4Mj+yn4T3FBHQ7cMc6lELq3AspplPtBQsmBgDfhjfVg1I4+kEqlqvMmBXvkZu7SGFPiUPQlioc6MCfPrB8/wSLBG/pEWqlStSpdkbOBEEivzl5kpIYrbNpwH3q/sL6mbZB4fYlpLP6SY4uxDutOWZutUUlzDguTJUprXhv8BnwgqPoBM7wwuPY+U9PSdLY8pxG40xO+UQ9llhK0rTX9Io1k8OtlJeJu/zVCmcEIp7bMmk4GLYHzfhe1JW7+O8RnNxmyEbfEZpJRKD+squSzbEC4jYJSF2ZIG9++KZY33LUAy3Krn46o8Bo+vBJX3HRYdgtGaejzyYimDJ2OPL+UB5K9tTqqKbQlmhZODmFmTVgZabEHzHSuT+dTFBmmzW17ll4cWYHemkonjSM+nl3zO9Quwp+HRmkAa5/uJIFeVLZInx7/aeHCar427H5OnfpuSLc1X9uSNlPAvvIdlXagkfCOLBFXlBSPhkDBqBq9MX7u0ic=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ4MRNp0lqMmdnWHkBaN0bYiu3NyVZLTvXbzAb78HL/H#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrqTuBK9SuQu9hS9hBIqRv9weMcR5IS3TOGti2Gz24hxwuCxS2PuVSyWVacVoXmRrXt6Nl3b5KRQ35C6gTvbIU=#012 create=True mode=0644 path=/tmp/ansible.phiqi8um state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:56:28 np0005603541 python3.9[126514]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.phiqi8um' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000043s ======
Jan 31 01:56:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000043s
Jan 31 01:56:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:29 np0005603541 python3.9[126719]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.phiqi8um state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:30 np0005603541 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 01:56:30 np0005603541 systemd[1]: session-41.scope: Consumed 4.397s CPU time.
Jan 31 01:56:30 np0005603541 systemd-logind[817]: Session 41 logged out. Waiting for processes to exit.
Jan 31 01:56:30 np0005603541 systemd-logind[817]: Removed session 41.
Jan 31 01:56:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:31.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:31.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:33.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:35 np0005603541 systemd-logind[817]: New session 42 of user zuul.
Jan 31 01:56:35 np0005603541 systemd[1]: Started Session 42 of User zuul.
Jan 31 01:56:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:36 np0005603541 python3.9[126900]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:37.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:38 np0005603541 python3.9[127057]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 01:56:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:38 np0005603541 python3.9[127211]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:56:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:39 np0005603541 python3.9[127365]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:40 np0005603541 python3.9[127518]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:56:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:41 np0005603541 python3.9[127670]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:41.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:41 np0005603541 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 01:56:41 np0005603541 systemd[1]: session-42.scope: Consumed 3.282s CPU time.
Jan 31 01:56:41 np0005603541 systemd-logind[817]: Session 42 logged out. Waiting for processes to exit.
Jan 31 01:56:41 np0005603541 systemd-logind[817]: Removed session 42.
Jan 31 01:56:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 01:56:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 01:56:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:43.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 01:56:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 01:56:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:45.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:47.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:47 np0005603541 systemd-logind[817]: New session 43 of user zuul.
Jan 31 01:56:47 np0005603541 systemd[1]: Started Session 43 of User zuul.
Jan 31 01:56:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:56:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:56:48 np0005603541 python3.9[127853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:56:49
Jan 31 01:56:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:56:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:56:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'volumes']
Jan 31 01:56:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:56:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:49.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:49.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:49 np0005603541 python3.9[128058]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:56:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:50 np0005603541 python3.9[128144]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 01:56:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:51.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000001s ======
Jan 31 01:56:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:51.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000001s
Jan 31 01:56:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:52 np0005603541 python3.9[128296]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000000s ======
Jan 31 01:56:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:53.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Jan 31 01:56:53 np0005603541 python3.9[128447]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 01:56:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:54 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:54 np0005603541 python3.9[128598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:56:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:56:54 np0005603541 python3.9[128748]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:56:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:55.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:55 np0005603541 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 01:56:55 np0005603541 systemd[1]: session-43.scope: Consumed 4.981s CPU time.
Jan 31 01:56:55 np0005603541 systemd-logind[817]: Session 43 logged out. Waiting for processes to exit.
Jan 31 01:56:55 np0005603541 systemd-logind[817]: Removed session 43.
Jan 31 01:56:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000001s ======
Jan 31 01:56:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:55.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000001s
Jan 31 01:56:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:57.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000001s ======
Jan 31 01:56:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000001s
Jan 31 01:56:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:56:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 01:56:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2409 writes, 10K keys, 2409 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2409 writes, 2409 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2409 writes, 10K keys, 2409 commit groups, 1.0 writes per commit group, ingest: 13.75 MB, 0.02 MB/s#012Interval WAL: 2409 writes, 2409 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     66.9      0.16              0.02         3    0.054       0      0       0.0       0.0#012  L6      1/0    9.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     88.7     81.0      0.21              0.03         2    0.104    8313    818       0.0       0.0#012 Sum      1/0    9.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     49.8     74.8      0.37              0.06         5    0.074    8313    818       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     50.2     75.1      0.37              0.06         4    0.092    8313    818       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     88.7     81.0      0.21              0.03         2    0.104    8313    818       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     67.6      0.16              0.02         2    0.080       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561559fff1f0#2 capacity: 308.00 MB usage: 699.28 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,593.58 KB,0.188203%) FilterBlock(6,34.55 KB,0.0109536%) IndexBlock(6,71.16 KB,0.0225612%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 01:56:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:56:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:56:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:56:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:56:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:56:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:56:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:59.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:00 np0005603541 systemd-logind[817]: New session 44 of user zuul.
Jan 31 01:57:00 np0005603541 systemd[1]: Started Session 44 of User zuul.
Jan 31 01:57:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:01.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000001s ======
Jan 31 01:57:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:01.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000001s
Jan 31 01:57:01 np0005603541 python3.9[128929]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:57:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:03 np0005603541 python3.9[129086]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000001s ======
Jan 31 01:57:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:03.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000001s
Jan 31 01:57:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:04 np0005603541 python3.9[129239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:05 np0005603541 python3.9[129391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:05 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:05 np0005603541 python3.9[129514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842624.2926-155-74023111900010/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=53b9b41b635c2d0a7e34dc1788c7b6d942954fe4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:06 np0005603541 python3.9[129667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:06 np0005603541 python3.9[129790]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842625.7364585-155-208574004764812/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8029520a6e9bb0cd2e43949d17831c57eb8ef4f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:07 np0005603541 python3.9[129942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:07.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:07.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:07 np0005603541 python3.9[130065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842626.7306757-155-46854791486413/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0e7cf6ce4591073fcd32f01e8099f6268f2fd424 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:08 np0005603541 python3.9[130218]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:08 np0005603541 python3.9[130370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.366919) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629366968, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1114, "num_deletes": 252, "total_data_size": 1422321, "memory_usage": 1457176, "flush_reason": "Manual Compaction"}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629393370, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 920801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10285, "largest_seqno": 11398, "table_properties": {"data_size": 916562, "index_size": 1635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12629, "raw_average_key_size": 21, "raw_value_size": 906706, "raw_average_value_size": 1511, "num_data_blocks": 70, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842556, "oldest_key_time": 1769842556, "file_creation_time": 1769842629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 26530 microseconds, and 4111 cpu microseconds.
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.393447) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 920801 bytes OK
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.393477) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.395884) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.395947) EVENT_LOG_v1 {"time_micros": 1769842629395932, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.395980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1417091, prev total WAL file size 1417091, number of live WAL files 2.
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.396662) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(899KB)], [23(9470KB)]
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629396725, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10619076, "oldest_snapshot_seqno": -1}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4298 keys, 8066512 bytes, temperature: kUnknown
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629511742, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 8066512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8035695, "index_size": 18995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106199, "raw_average_key_size": 24, "raw_value_size": 7955653, "raw_average_value_size": 1851, "num_data_blocks": 825, "num_entries": 4298, "num_filter_entries": 4298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769842629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.512017) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 8066512 bytes
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.514505) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.3 rd, 70.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.2 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(20.3) write-amplify(8.8) OK, records in: 4786, records dropped: 488 output_compression: NoCompression
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.514553) EVENT_LOG_v1 {"time_micros": 1769842629514532, "job": 8, "event": "compaction_finished", "compaction_time_micros": 115095, "compaction_time_cpu_micros": 28559, "output_level": 6, "num_output_files": 1, "total_output_size": 8066512, "num_input_records": 4786, "num_output_records": 4298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629514814, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629515939, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.396520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.516062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.516070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.516073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.516076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:57:09.516079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:09 np0005603541 python3.9[130571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:57:10 np0005603541 python3.9[130696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842629.2236912-337-86090799648173/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f15401de6b485d09cbd84c6ad57debd532acae71 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:10 np0005603541 python3.9[130848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:11 np0005603541 python3.9[130971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842630.3808582-337-155185457352093/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0dd65ea80bde2935b665c1b68742c885268ebc5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:11.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:11.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:11 np0005603541 python3.9[131124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:12 np0005603541 python3.9[131247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842631.4652572-337-105613816575897/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ef0aa3d3b95983ca498f8db4e515624f6a58962e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:13 np0005603541 python3.9[131399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:13.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:13 np0005603541 python3.9[131552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:14 np0005603541 python3.9[131704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:14 np0005603541 python3.9[131827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842633.9904323-519-141105354038940/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=db5fecfb4fac424fbc0388aeefd9a31caf7bab32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:15 np0005603541 python3.9[131979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:15 np0005603541 python3.9[132103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842635.0623374-519-223879888019089/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0dd65ea80bde2935b665c1b68742c885268ebc5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:16 np0005603541 python3.9[132255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:17 np0005603541 python3.9[132378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842636.1408808-519-170005859747755/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3bf27b1c128c1bda588d88440dfd3b0a46f09454 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:17.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:19 np0005603541 python3.9[132531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:19.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:20 np0005603541 python3.9[132684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:20 np0005603541 python3.9[132807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842639.5664628-726-82776803078138/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:21.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:21 np0005603541 python3.9[132959]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:22 np0005603541 python3.9[133112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:23.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:23 np0005603541 python3.9[133235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842642.040459-798-193286438481612/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:24 np0005603541 python3.9[133388]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:24 np0005603541 python3.9[133540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:25.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:25.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:25 np0005603541 python3.9[133663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842644.4713595-866-193268111736607/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:26 np0005603541 python3.9[133816]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:27 np0005603541 python3.9[133968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:27 np0005603541 python3.9[134091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842646.6055915-939-119429111466554/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:27.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:28 np0005603541 python3.9[134265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:28 np0005603541 python3.9[134527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:29 np0005603541 python3.9[134650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842648.4262738-1012-144860073275064/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:29 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:29.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 85a1b3a0-5499-4411-85bf-527364319f51 does not exist
Jan 31 01:57:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev da008aed-b05d-4ec5-ac74-f02627f19728 does not exist
Jan 31 01:57:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 8067d7d6-02ca-4914-9d5d-cc7b9a2bd4f0 does not exist
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:57:30 np0005603541 python3.9[134853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:57:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.558352231 +0000 UTC m=+0.067650163 container create 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.513298781 +0000 UTC m=+0.022596733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:30 np0005603541 systemd[1]: Started libpod-conmon-7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01.scope.
Jan 31 01:57:30 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.710846583 +0000 UTC m=+0.220144555 container init 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.718506309 +0000 UTC m=+0.227804251 container start 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 01:57:30 np0005603541 objective_joliot[135163]: 167 167
Jan 31 01:57:30 np0005603541 systemd[1]: libpod-7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01.scope: Deactivated successfully.
Jan 31 01:57:30 np0005603541 python3.9[135146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.83573296 +0000 UTC m=+0.345030912 container attach 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.836529689 +0000 UTC m=+0.345827671 container died 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 01:57:30 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7d478ecb9e40dcde9a343e02f139ce14467985e3c789eee396197f4b2eef3453-merged.mount: Deactivated successfully.
Jan 31 01:57:30 np0005603541 podman[135147]: 2026-01-31 06:57:30.936983171 +0000 UTC m=+0.446281133 container remove 7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:57:30 np0005603541 systemd[1]: libpod-conmon-7a52bc2810511f04802ddf588b99693a02c9526e14bacb480d4d393b69c35d01.scope: Deactivated successfully.
Jan 31 01:57:31 np0005603541 podman[135235]: 2026-01-31 06:57:31.116738239 +0000 UTC m=+0.074181482 container create 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:57:31 np0005603541 podman[135235]: 2026-01-31 06:57:31.074343983 +0000 UTC m=+0.031787226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:31 np0005603541 systemd[1]: Started libpod-conmon-7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40.scope.
Jan 31 01:57:31 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:31 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:31 np0005603541 podman[135235]: 2026-01-31 06:57:31.233562629 +0000 UTC m=+0.191005862 container init 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 31 01:57:31 np0005603541 podman[135235]: 2026-01-31 06:57:31.242828876 +0000 UTC m=+0.200272089 container start 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:57:31 np0005603541 podman[135235]: 2026-01-31 06:57:31.247289595 +0000 UTC m=+0.204732818 container attach 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:57:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:31.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:31.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:31 np0005603541 python3.9[135332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842650.2144895-1084-168172488058496/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:32 np0005603541 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 01:57:32 np0005603541 systemd[1]: session-44.scope: Consumed 19.405s CPU time.
Jan 31 01:57:32 np0005603541 systemd-logind[817]: Session 44 logged out. Waiting for processes to exit.
Jan 31 01:57:32 np0005603541 systemd-logind[817]: Removed session 44.
Jan 31 01:57:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:32 np0005603541 determined_goldwasser[135275]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:57:32 np0005603541 determined_goldwasser[135275]: --> relative data size: 1.0
Jan 31 01:57:32 np0005603541 determined_goldwasser[135275]: --> All data devices are unavailable
Jan 31 01:57:32 np0005603541 systemd[1]: libpod-7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40.scope: Deactivated successfully.
Jan 31 01:57:32 np0005603541 podman[135235]: 2026-01-31 06:57:32.268844896 +0000 UTC m=+1.226288099 container died 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:57:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7c17c78cabfde225b5fd24bdae1620a4e64ba5a270f9ec9f3b669c554025be4d-merged.mount: Deactivated successfully.
Jan 31 01:57:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:32 np0005603541 podman[135235]: 2026-01-31 06:57:32.609850609 +0000 UTC m=+1.567293832 container remove 7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:57:32 np0005603541 systemd[1]: libpod-conmon-7725e90d0ba524793213caae89e6367f8cddeafe31c059897ef60368f8aa5e40.scope: Deactivated successfully.
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.184373941 +0000 UTC m=+0.061923512 container create 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:57:33 np0005603541 systemd[1]: Started libpod-conmon-40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562.scope.
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.14581225 +0000 UTC m=+0.023361801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.297212165 +0000 UTC m=+0.174761716 container init 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.303791536 +0000 UTC m=+0.181341107 container start 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:57:33 np0005603541 fervent_euler[135539]: 167 167
Jan 31 01:57:33 np0005603541 systemd[1]: libpod-40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562.scope: Deactivated successfully.
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.328790975 +0000 UTC m=+0.206340526 container attach 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.329872402 +0000 UTC m=+0.207421983 container died 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:57:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:33.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:33 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d1dae328fdb0c567a564fe63f29cdc918a45c21195010d1aa01c74065a031116-merged.mount: Deactivated successfully.
Jan 31 01:57:33 np0005603541 podman[135523]: 2026-01-31 06:57:33.477213818 +0000 UTC m=+0.354763389 container remove 40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 01:57:33 np0005603541 systemd[1]: libpod-conmon-40e4232d64abc4fea62f987892f842d93ce622d7ff7787b574b293dc3d6fa562.scope: Deactivated successfully.
Jan 31 01:57:33 np0005603541 podman[135563]: 2026-01-31 06:57:33.687242303 +0000 UTC m=+0.085787584 container create 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 01:57:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:33.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:33 np0005603541 podman[135563]: 2026-01-31 06:57:33.628833249 +0000 UTC m=+0.027378620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:33 np0005603541 systemd[1]: Started libpod-conmon-0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6.scope.
Jan 31 01:57:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e087b465593421c54787e5b5c32fd6fc574a9dab84691f6c0f1f2a1ed3c3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e087b465593421c54787e5b5c32fd6fc574a9dab84691f6c0f1f2a1ed3c3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e087b465593421c54787e5b5c32fd6fc574a9dab84691f6c0f1f2a1ed3c3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42e087b465593421c54787e5b5c32fd6fc574a9dab84691f6c0f1f2a1ed3c3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:33 np0005603541 podman[135563]: 2026-01-31 06:57:33.798310724 +0000 UTC m=+0.196856075 container init 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 01:57:33 np0005603541 podman[135563]: 2026-01-31 06:57:33.804573517 +0000 UTC m=+0.203118798 container start 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:57:33 np0005603541 podman[135563]: 2026-01-31 06:57:33.812069861 +0000 UTC m=+0.210615162 container attach 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 01:57:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]: {
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:    "0": [
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:        {
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "devices": [
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "/dev/loop3"
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            ],
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "lv_name": "ceph_lv0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "lv_size": "7511998464",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "name": "ceph_lv0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "tags": {
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.cluster_name": "ceph",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.crush_device_class": "",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.encrypted": "0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.osd_id": "0",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.type": "block",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:                "ceph.vdo": "0"
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            },
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "type": "block",
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:            "vg_name": "ceph_vg0"
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:        }
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]:    ]
Jan 31 01:57:34 np0005603541 compassionate_brown[135582]: }
Jan 31 01:57:34 np0005603541 systemd[1]: libpod-0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6.scope: Deactivated successfully.
Jan 31 01:57:34 np0005603541 podman[135563]: 2026-01-31 06:57:34.608818006 +0000 UTC m=+1.007363297 container died 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 01:57:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b42e087b465593421c54787e5b5c32fd6fc574a9dab84691f6c0f1f2a1ed3c3b-merged.mount: Deactivated successfully.
Jan 31 01:57:34 np0005603541 podman[135563]: 2026-01-31 06:57:34.864287571 +0000 UTC m=+1.262832892 container remove 0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 01:57:34 np0005603541 systemd[1]: libpod-conmon-0d96a24798210661667e214df0742b81d661b88aa2d83edcf058f08ef82dfff6.scope: Deactivated successfully.
Jan 31 01:57:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:35.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.44117372 +0000 UTC m=+0.042532969 container create ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:57:35 np0005603541 systemd[1]: Started libpod-conmon-ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d.scope.
Jan 31 01:57:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.423050698 +0000 UTC m=+0.024409957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.518242141 +0000 UTC m=+0.119601400 container init ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.523669734 +0000 UTC m=+0.125029013 container start ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.527136998 +0000 UTC m=+0.128496277 container attach ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:57:35 np0005603541 musing_thompson[135762]: 167 167
Jan 31 01:57:35 np0005603541 systemd[1]: libpod-ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d.scope: Deactivated successfully.
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.527995799 +0000 UTC m=+0.129355088 container died ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:57:35 np0005603541 systemd[1]: var-lib-containers-storage-overlay-cb4d5c70f1bec9a0c0b5fea7b92f9a35290e85c65d048fa1f0f1a8481695aea2-merged.mount: Deactivated successfully.
Jan 31 01:57:35 np0005603541 podman[135745]: 2026-01-31 06:57:35.575084529 +0000 UTC m=+0.176443768 container remove ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 01:57:35 np0005603541 systemd[1]: libpod-conmon-ba699b7bd366a2306ffb5e57adac976a1f4916a625c5a17be7b6ba85e20b411d.scope: Deactivated successfully.
Jan 31 01:57:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:35 np0005603541 podman[135787]: 2026-01-31 06:57:35.726706349 +0000 UTC m=+0.056465709 container create 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 01:57:35 np0005603541 systemd[1]: Started libpod-conmon-92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f.scope.
Jan 31 01:57:35 np0005603541 podman[135787]: 2026-01-31 06:57:35.694762499 +0000 UTC m=+0.024521869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:57:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:57:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd79758d29549719b13e696c66f241d0496320250dcf81e60190f1e57d3f55ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd79758d29549719b13e696c66f241d0496320250dcf81e60190f1e57d3f55ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd79758d29549719b13e696c66f241d0496320250dcf81e60190f1e57d3f55ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:35 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd79758d29549719b13e696c66f241d0496320250dcf81e60190f1e57d3f55ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:57:35 np0005603541 podman[135787]: 2026-01-31 06:57:35.817695699 +0000 UTC m=+0.147455069 container init 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 01:57:35 np0005603541 podman[135787]: 2026-01-31 06:57:35.823806838 +0000 UTC m=+0.153566188 container start 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:57:35 np0005603541 podman[135787]: 2026-01-31 06:57:35.827487718 +0000 UTC m=+0.157247258 container attach 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 01:57:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]: {
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:        "osd_id": 0,
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:        "type": "bluestore"
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]:    }
Jan 31 01:57:36 np0005603541 vibrant_elbakyan[135804]: }
Jan 31 01:57:36 np0005603541 systemd[1]: libpod-92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f.scope: Deactivated successfully.
Jan 31 01:57:36 np0005603541 podman[135787]: 2026-01-31 06:57:36.715783677 +0000 UTC m=+1.045543067 container died 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:57:36 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fd79758d29549719b13e696c66f241d0496320250dcf81e60190f1e57d3f55ba-merged.mount: Deactivated successfully.
Jan 31 01:57:36 np0005603541 podman[135787]: 2026-01-31 06:57:36.957446396 +0000 UTC m=+1.287205776 container remove 92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elbakyan, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 01:57:36 np0005603541 systemd[1]: libpod-conmon-92ccf06c3b7fbeb2699ddc9cda1a4942cf9f51867fec3a96e6292ff43567654f.scope: Deactivated successfully.
Jan 31 01:57:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:57:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:57:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0cafe2c4-b5c2-49f1-a702-4678221d03ea does not exist
Jan 31 01:57:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 28b6e7f9-447a-4409-bda9-d7f336b393e8 does not exist
Jan 31 01:57:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 97942e79-b18c-4fa2-832b-1fe6a9e89002 does not exist
Jan 31 01:57:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:57:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:38 np0005603541 systemd-logind[817]: New session 45 of user zuul.
Jan 31 01:57:38 np0005603541 systemd[1]: Started Session 45 of User zuul.
Jan 31 01:57:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:39 np0005603541 python3.9[136043]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:39.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:40 np0005603541 python3.9[136196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:40 np0005603541 python3.9[136319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842659.6248703-62-251015859137414/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=6179fb8736d86099e122798f305813e20025174a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:41 np0005603541 python3.9[136471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:57:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:41.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:41 np0005603541 python3.9[136595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842660.9469204-62-115261148663444/.source.conf _original_basename=ceph.conf follow=False checksum=3fbed2da8eef23ae823cb444b6d55e1b9e218e83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:57:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:42 np0005603541 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 01:57:42 np0005603541 systemd[1]: session-45.scope: Consumed 2.320s CPU time.
Jan 31 01:57:42 np0005603541 systemd-logind[817]: Session 45 logged out. Waiting for processes to exit.
Jan 31 01:57:42 np0005603541 systemd-logind[817]: Removed session 45.
Jan 31 01:57:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:45.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:47.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:57:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:57:48 np0005603541 systemd-logind[817]: New session 46 of user zuul.
Jan 31 01:57:48 np0005603541 systemd[1]: Started Session 46 of User zuul.
Jan 31 01:57:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:57:49
Jan 31 01:57:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:57:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:57:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data']
Jan 31 01:57:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:57:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:57:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:49.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:57:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:49.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:49 np0005603541 python3.9[136777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:57:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:50 np0005603541 python3.9[136983]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:51.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:51 np0005603541 python3.9[137135]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:57:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:51.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:52 np0005603541 python3.9[137286]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:57:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:53 np0005603541 python3.9[137438]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 01:57:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:53.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:57:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:57:54 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 01:57:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:55 np0005603541 python3.9[137595]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:57:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:55.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:56 np0005603541 python3.9[137680]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:57:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:57.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:57:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:58 np0005603541 python3.9[137834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 01:57:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:57:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:57:59 np0005603541 python3[137989]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 01:57:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:57:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:57:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:57:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:59.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:57:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:57:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:57:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:59.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:00 np0005603541 python3.9[138142]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:00 np0005603541 python3.9[138294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:01 np0005603541 python3.9[138372]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 01:58:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:01.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 01:58:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:01.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:01 np0005603541 python3.9[138525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:02 np0005603541 python3.9[138603]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w8h0_fl1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:02 np0005603541 python3.9[138755]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:03.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:03 np0005603541 python3.9[138833]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:03.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:04 np0005603541 python3.9[138986]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:04 np0005603541 python3[139139]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 01:58:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:05 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:05.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:05 np0005603541 python3.9[139291]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:05.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:06 np0005603541 python3.9[139417]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842685.1397176-431-137784242698740/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:06 np0005603541 python3.9[139569]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:07 np0005603541 python3.9[139694]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842686.3987198-476-177141621132182/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:07.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:07.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:07 np0005603541 python3.9[139847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:08 np0005603541 python3.9[139972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842687.51619-521-59741153545152/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:09 np0005603541 python3.9[140124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:09 np0005603541 python3.9[140249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842688.6464536-566-195583635964554/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:09.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:58:10 np0005603541 python3.9[140452]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:10 np0005603541 python3.9[140577]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842689.8260322-611-270017420291080/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:11.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:11 np0005603541 python3.9[140729]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:11.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:12 np0005603541 python3.9[140882]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:12 np0005603541 python3.9[141037]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
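Decoding the `#012` escapes in the `blockinfile` invocation above, the managed block inserted into `/etc/sysconfig/nftables.conf` (between the `# BEGIN`/`# END` markers given by the `marker` argument, and validated with `nft -c -f %s` before being committed) would look like this:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

This makes the EDPM-generated chains, rules, and jump definitions load at boot via the standard `nftables.service` config, matching the files the preceding tasks wrote under `/etc/nftables/`.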
Jan 31 01:58:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:13 np0005603541 python3.9[141189]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:13.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:14 np0005603541 python3.9[141343]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:58:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:14 np0005603541 python3.9[141497]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:15 np0005603541 python3.9[141652]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:15.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:16 np0005603541 python3.9[141803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:58:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:58:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:17.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:58:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:17.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:17 np0005603541 python3.9[141957]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:17 np0005603541 ovs-vsctl[141958]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:18 np0005603541 python3.9[142110]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:19.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:19 np0005603541 python3.9[142265]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:19 np0005603541 ovs-vsctl[142266]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 01:58:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:19.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:20 np0005603541 python3.9[142417]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:58:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:20 np0005603541 python3.9[142571]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:21 np0005603541 python3.9[142723]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:58:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:21.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:58:22 np0005603541 python3.9[142802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:22 np0005603541 python3.9[142954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:23 np0005603541 python3.9[143032]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:23 np0005603541 python3.9[143184]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:23.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:24 np0005603541 python3.9[143337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:24 np0005603541 python3.9[143415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:25 np0005603541 python3.9[143567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:25 np0005603541 python3.9[143645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:25.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:26 np0005603541 python3.9[143798]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:58:26 np0005603541 systemd[1]: Reloading.
Jan 31 01:58:26 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:26 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:58:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:27 np0005603541 python3.9[143986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:27.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:27 np0005603541 python3.9[144065]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:28 np0005603541 python3.9[144217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:28 np0005603541 python3.9[144295]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:29 np0005603541 python3.9[144447]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:58:29 np0005603541 systemd[1]: Reloading.
Jan 31 01:58:29 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:58:29 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:29.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:29 np0005603541 systemd[1]: Starting Create netns directory...
Jan 31 01:58:29 np0005603541 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 01:58:29 np0005603541 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 01:58:29 np0005603541 systemd[1]: Finished Create netns directory.
Jan 31 01:58:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:30 np0005603541 python3.9[144692]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:31 np0005603541 python3.9[144844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:58:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:58:31 np0005603541 python3.9[144968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842710.9931376-1364-27037216493277/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:33 np0005603541 python3.9[145120]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:33.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:34 np0005603541 python3.9[145273]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:58:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:35 np0005603541 python3.9[145425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:35 np0005603541 python3.9[145548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842714.6041913-1463-197213632997670/.source.json _original_basename=.dxzwf011 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:36 np0005603541 python3.9[145699]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 01:58:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 01:58:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:58:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 python3.9[146243]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 01:58:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 143c06fe-4459-4487-bb9f-65eeeb3412d7 does not exist
Jan 31 01:58:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 4720b490-510a-4d56-94a7-2427b0f3a338 does not exist
Jan 31 01:58:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 756184ec-80bb-4bfc-b048-72089ed8e8f5 does not exist
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:58:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.251848531 +0000 UTC m=+0.037001255 container create 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 01:58:39 np0005603541 systemd[1]: Started libpod-conmon-44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866.scope.
Jan 31 01:58:39 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.315662367 +0000 UTC m=+0.100815111 container init 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.320982597 +0000 UTC m=+0.106135321 container start 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.32356162 +0000 UTC m=+0.108714364 container attach 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:58:39 np0005603541 sweet_galois[146684]: 167 167
Jan 31 01:58:39 np0005603541 systemd[1]: libpod-44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866.scope: Deactivated successfully.
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.324694367 +0000 UTC m=+0.109847091 container died 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.237390937 +0000 UTC m=+0.022543671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:39 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f1288991cf6a85355577d23ba7ddcb457fee88db8f455a2d38acf72fe8dd9341-merged.mount: Deactivated successfully.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:58:39 np0005603541 podman[146668]: 2026-01-31 06:58:39.361197558 +0000 UTC m=+0.146350282 container remove 44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 01:58:39 np0005603541 systemd[1]: libpod-conmon-44cd209489c653168f7c989ab0f7c4971a874eb440e7de455cd605fc316be866.scope: Deactivated successfully.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:39 np0005603541 python3.9[146664]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 01:58:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:39 np0005603541 podman[146711]: 2026-01-31 06:58:39.456529873 +0000 UTC m=+0.033886987 container create 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 31 01:58:39 np0005603541 systemd[1]: Started libpod-conmon-511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a.scope.
Jan 31 01:58:39 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:39 np0005603541 podman[146711]: 2026-01-31 06:58:39.535641713 +0000 UTC m=+0.112998847 container init 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 01:58:39 np0005603541 podman[146711]: 2026-01-31 06:58:39.439656642 +0000 UTC m=+0.017013776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:39 np0005603541 podman[146711]: 2026-01-31 06:58:39.546252002 +0000 UTC m=+0.123609106 container start 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:58:39 np0005603541 podman[146711]: 2026-01-31 06:58:39.549823358 +0000 UTC m=+0.127180492 container attach 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.593815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719593849, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 251, "total_data_size": 1793439, "memory_usage": 1818944, "flush_reason": "Manual Compaction"}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719605096, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1754197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11399, "largest_seqno": 12745, "table_properties": {"data_size": 1748341, "index_size": 2931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14850, "raw_average_key_size": 20, "raw_value_size": 1735467, "raw_average_value_size": 2393, "num_data_blocks": 128, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842629, "oldest_key_time": 1769842629, "file_creation_time": 1769842719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11316 microseconds, and 3415 cpu microseconds.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.605132) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1754197 bytes OK
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.605150) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.606582) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.606621) EVENT_LOG_v1 {"time_micros": 1769842719606616, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.606637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1787359, prev total WAL file size 1787359, number of live WAL files 2.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.607342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1713KB)], [26(7877KB)]
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719607437, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9820709, "oldest_snapshot_seqno": -1}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4504 keys, 7569853 bytes, temperature: kUnknown
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719661326, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7569853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7539213, "index_size": 18306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111873, "raw_average_key_size": 24, "raw_value_size": 7456985, "raw_average_value_size": 1655, "num_data_blocks": 783, "num_entries": 4504, "num_filter_entries": 4504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769842719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.661706) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7569853 bytes
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.665066) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.9 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.3) OK, records in: 5023, records dropped: 519 output_compression: NoCompression
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.665091) EVENT_LOG_v1 {"time_micros": 1769842719665078, "job": 10, "event": "compaction_finished", "compaction_time_micros": 53994, "compaction_time_cpu_micros": 23832, "output_level": 6, "num_output_files": 1, "total_output_size": 7569853, "num_input_records": 5023, "num_output_records": 4504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719665403, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719666246, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.607209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.666375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.666384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.666386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.666387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-06:58:39.666389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 01:58:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:40 np0005603541 elated_franklin[146745]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:58:40 np0005603541 elated_franklin[146745]: --> relative data size: 1.0
Jan 31 01:58:40 np0005603541 elated_franklin[146745]: --> All data devices are unavailable
Jan 31 01:58:40 np0005603541 systemd[1]: libpod-511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a.scope: Deactivated successfully.
Jan 31 01:58:40 np0005603541 podman[146711]: 2026-01-31 06:58:40.277884958 +0000 UTC m=+0.855242092 container died 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:58:40 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d0e397a1c2b83bf1a73ad6fa5807f96639c490637d7b5e35f36f7c70f26d27d5-merged.mount: Deactivated successfully.
Jan 31 01:58:40 np0005603541 podman[146711]: 2026-01-31 06:58:40.32673048 +0000 UTC m=+0.904087584 container remove 511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_franklin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 01:58:40 np0005603541 systemd[1]: libpod-conmon-511a8dce7770c065c394760184aadbc8d22b3ef501ae99720bccaa769a1ed50a.scope: Deactivated successfully.
Jan 31 01:58:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:40 np0005603541 python3[146894]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.822085283 +0000 UTC m=+0.049679072 container create b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:58:40 np0005603541 systemd[1]: Started libpod-conmon-b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2.scope.
Jan 31 01:58:40 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.898082297 +0000 UTC m=+0.125676106 container init b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.803536301 +0000 UTC m=+0.031130110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.906298307 +0000 UTC m=+0.133892116 container start b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:40 np0005603541 goofy_mirzakhani[147090]: 167 167
Jan 31 01:58:40 np0005603541 systemd[1]: libpod-b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2.scope: Deactivated successfully.
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.9150127 +0000 UTC m=+0.142606499 container attach b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.915326168 +0000 UTC m=+0.142919937 container died b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:40 np0005603541 systemd[1]: var-lib-containers-storage-overlay-269af678c5244754ab945c3af748284729db23a3120b6f39205a533c8c0e66c9-merged.mount: Deactivated successfully.
Jan 31 01:58:40 np0005603541 podman[147074]: 2026-01-31 06:58:40.954661447 +0000 UTC m=+0.182255226 container remove b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 01:58:40 np0005603541 systemd[1]: libpod-conmon-b783283881f85fbf1d3716745e4d6ada1d684ed2b021766cb8a69154ed9aa0b2.scope: Deactivated successfully.
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.097165464 +0000 UTC m=+0.051400415 container create 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 31 01:58:41 np0005603541 systemd[1]: Started libpod-conmon-628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1.scope.
Jan 31 01:58:41 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f905cf08ea6ff2f5747738d6656b325d7218e32ebfa3bbc7881fdb923444f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.076167412 +0000 UTC m=+0.030402393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f905cf08ea6ff2f5747738d6656b325d7218e32ebfa3bbc7881fdb923444f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f905cf08ea6ff2f5747738d6656b325d7218e32ebfa3bbc7881fdb923444f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f905cf08ea6ff2f5747738d6656b325d7218e32ebfa3bbc7881fdb923444f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.188135422 +0000 UTC m=+0.142370393 container init 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.198390783 +0000 UTC m=+0.152625744 container start 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.204565163 +0000 UTC m=+0.158800144 container attach 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:41.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]: {
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:    "0": [
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:        {
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "devices": [
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "/dev/loop3"
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            ],
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "lv_name": "ceph_lv0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "lv_size": "7511998464",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "name": "ceph_lv0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "tags": {
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.cluster_name": "ceph",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.crush_device_class": "",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.encrypted": "0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.osd_id": "0",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.type": "block",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:                "ceph.vdo": "0"
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            },
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "type": "block",
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:            "vg_name": "ceph_vg0"
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:        }
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]:    ]
Jan 31 01:58:41 np0005603541 jovial_hertz[147150]: }
Jan 31 01:58:41 np0005603541 systemd[1]: libpod-628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1.scope: Deactivated successfully.
Jan 31 01:58:41 np0005603541 conmon[147150]: conmon 628a6bf8df6c48cd2d1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1.scope/container/memory.events
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.899653209 +0000 UTC m=+0.853888180 container died 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 01:58:41 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6f905cf08ea6ff2f5747738d6656b325d7218e32ebfa3bbc7881fdb923444f28-merged.mount: Deactivated successfully.
Jan 31 01:58:41 np0005603541 podman[147118]: 2026-01-31 06:58:41.947586608 +0000 UTC m=+0.901821569 container remove 628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:58:41 np0005603541 systemd[1]: libpod-conmon-628a6bf8df6c48cd2d1fab24072702db5b837450ba4067c4e6c93eca73b92fa1.scope: Deactivated successfully.
Jan 31 01:58:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:44 np0005603541 podman[147022]: 2026-01-31 06:58:44.803451071 +0000 UTC m=+4.243615776 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.850546819 +0000 UTC m=+0.035818974 container create 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:58:44 np0005603541 systemd[1]: Started libpod-conmon-800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119.scope.
Jan 31 01:58:44 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.921267925 +0000 UTC m=+0.106540120 container init 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 01:58:44 np0005603541 podman[147413]: 2026-01-31 06:58:44.924489783 +0000 UTC m=+0.045919961 container create 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 01:58:44 np0005603541 podman[147413]: 2026-01-31 06:58:44.900329984 +0000 UTC m=+0.021760202 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 01:58:44 np0005603541 python3[146894]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.929788263 +0000 UTC m=+0.115060458 container start 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.833306609 +0000 UTC m=+0.018578784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:44 np0005603541 gracious_driscoll[147419]: 167 167
Jan 31 01:58:44 np0005603541 systemd[1]: libpod-800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119.scope: Deactivated successfully.
Jan 31 01:58:44 np0005603541 conmon[147419]: conmon 800cf0c864740bb141c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119.scope/container/memory.events
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.934395745 +0000 UTC m=+0.119667940 container attach 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.935508982 +0000 UTC m=+0.120781167 container died 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 01:58:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:44 np0005603541 systemd[1]: var-lib-containers-storage-overlay-54c5594f6b46214a9a09282485041c5ba28b247137c5a635d738918337dad695-merged.mount: Deactivated successfully.
Jan 31 01:58:44 np0005603541 podman[147376]: 2026-01-31 06:58:44.980831207 +0000 UTC m=+0.166103372 container remove 800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 01:58:44 np0005603541 systemd[1]: libpod-conmon-800cf0c864740bb141c3f3fc2e6a937395dbaa5c378ae8c086cf4a3e05b50119.scope: Deactivated successfully.
Jan 31 01:58:45 np0005603541 podman[147469]: 2026-01-31 06:58:45.114910048 +0000 UTC m=+0.033511608 container create 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:58:45 np0005603541 systemd[1]: Started libpod-conmon-5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27.scope.
Jan 31 01:58:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330b6f97bf4859d909586a1c0d420bb70d24a2bb4efe9e5fc6d91773ab6a1ea1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330b6f97bf4859d909586a1c0d420bb70d24a2bb4efe9e5fc6d91773ab6a1ea1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330b6f97bf4859d909586a1c0d420bb70d24a2bb4efe9e5fc6d91773ab6a1ea1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330b6f97bf4859d909586a1c0d420bb70d24a2bb4efe9e5fc6d91773ab6a1ea1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:45 np0005603541 podman[147469]: 2026-01-31 06:58:45.191996979 +0000 UTC m=+0.110598589 container init 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 01:58:45 np0005603541 podman[147469]: 2026-01-31 06:58:45.099733398 +0000 UTC m=+0.018334988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:58:45 np0005603541 podman[147469]: 2026-01-31 06:58:45.199057631 +0000 UTC m=+0.117659191 container start 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:58:45 np0005603541 podman[147469]: 2026-01-31 06:58:45.20269054 +0000 UTC m=+0.121292130 container attach 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:45 np0005603541 python3.9[147642]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:58:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:58:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]: {
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:        "osd_id": 0,
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:        "type": "bluestore"
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]:    }
Jan 31 01:58:45 np0005603541 friendly_bassi[147509]: }
Jan 31 01:58:46 np0005603541 systemd[1]: libpod-5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27.scope: Deactivated successfully.
Jan 31 01:58:46 np0005603541 podman[147469]: 2026-01-31 06:58:46.015034715 +0000 UTC m=+0.933636285 container died 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:58:46 np0005603541 systemd[1]: var-lib-containers-storage-overlay-330b6f97bf4859d909586a1c0d420bb70d24a2bb4efe9e5fc6d91773ab6a1ea1-merged.mount: Deactivated successfully.
Jan 31 01:58:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:46 np0005603541 podman[147469]: 2026-01-31 06:58:46.170090998 +0000 UTC m=+1.088692558 container remove 5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bassi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 01:58:46 np0005603541 systemd[1]: libpod-conmon-5f4f70fd92adf6e80aa51611ba37e99b228edc0fe292db7cab5654e9ce8e0d27.scope: Deactivated successfully.
Jan 31 01:58:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 01:58:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 01:58:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0666f910-2609-4a3b-b4f9-4ee7331c3f09 does not exist
Jan 31 01:58:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 87d7cd28-a57a-4531-99be-d86d18702eb9 does not exist
Jan 31 01:58:46 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev bce6aa3b-6a3c-445e-9bf5-2468883835c8 does not exist
Jan 31 01:58:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:46 np0005603541 python3.9[147877]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:47 np0005603541 python3.9[147953]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:58:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:47 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:58:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:47 np0005603541 python3.9[148105]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842727.3811786-1697-141442019389064/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:58:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:48 np0005603541 python3.9[148181]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 01:58:48 np0005603541 systemd[1]: Reloading.
Jan 31 01:58:48 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:48 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:58:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:58:49
Jan 31 01:58:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:58:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:58:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.meta', 'vms']
Jan 31 01:58:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:58:49 np0005603541 python3.9[148293]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:58:49 np0005603541 systemd[1]: Reloading.
Jan 31 01:58:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:49 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:49 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:58:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:49 np0005603541 systemd[1]: Starting ovn_controller container...
Jan 31 01:58:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:58:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833c2c83ec0ec1c968f2fcf351c06f1d4f1a4a266a6d16d4f1799a852f4895cd/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 01:58:49 np0005603541 systemd[1]: Started /usr/bin/podman healthcheck run 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe.
Jan 31 01:58:49 np0005603541 podman[148334]: 2026-01-31 06:58:49.657990949 +0000 UTC m=+0.099914839 container init 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 01:58:49 np0005603541 ovn_controller[148349]: + sudo -E kolla_set_configs
Jan 31 01:58:49 np0005603541 podman[148334]: 2026-01-31 06:58:49.678620782 +0000 UTC m=+0.120544692 container start 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 01:58:49 np0005603541 edpm-start-podman-container[148334]: ovn_controller
Jan 31 01:58:49 np0005603541 systemd[1]: Created slice User Slice of UID 0.
Jan 31 01:58:49 np0005603541 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 01:58:49 np0005603541 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 01:58:49 np0005603541 systemd[1]: Starting User Manager for UID 0...
Jan 31 01:58:49 np0005603541 edpm-start-podman-container[148333]: Creating additional drop-in dependency for "ovn_controller" (55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe)
Jan 31 01:58:49 np0005603541 podman[148357]: 2026-01-31 06:58:49.73432538 +0000 UTC m=+0.049876847 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 01:58:49 np0005603541 systemd[1]: 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe-56989d4140bfa471.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 01:58:49 np0005603541 systemd[1]: 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe-56989d4140bfa471.service: Failed with result 'exit-code'.
Jan 31 01:58:49 np0005603541 systemd[1]: Reloading.
Jan 31 01:58:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:49.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:49 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:49 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:58:49 np0005603541 systemd[148378]: Queued start job for default target Main User Target.
Jan 31 01:58:49 np0005603541 systemd[148378]: Created slice User Application Slice.
Jan 31 01:58:49 np0005603541 systemd[148378]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 01:58:49 np0005603541 systemd[148378]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 01:58:49 np0005603541 systemd[148378]: Reached target Paths.
Jan 31 01:58:49 np0005603541 systemd[148378]: Reached target Timers.
Jan 31 01:58:49 np0005603541 systemd[148378]: Starting D-Bus User Message Bus Socket...
Jan 31 01:58:49 np0005603541 systemd[148378]: Starting Create User's Volatile Files and Directories...
Jan 31 01:58:49 np0005603541 systemd[148378]: Finished Create User's Volatile Files and Directories.
Jan 31 01:58:49 np0005603541 systemd[148378]: Listening on D-Bus User Message Bus Socket.
Jan 31 01:58:49 np0005603541 systemd[148378]: Reached target Sockets.
Jan 31 01:58:49 np0005603541 systemd[148378]: Reached target Basic System.
Jan 31 01:58:49 np0005603541 systemd[148378]: Reached target Main User Target.
Jan 31 01:58:49 np0005603541 systemd[148378]: Startup finished in 126ms.
Jan 31 01:58:49 np0005603541 systemd[1]: Started User Manager for UID 0.
Jan 31 01:58:49 np0005603541 systemd[1]: Started ovn_controller container.
Jan 31 01:58:49 np0005603541 systemd[1]: Started Session c1 of User root.
Jan 31 01:58:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: INFO:__main__:Validating config file
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: INFO:__main__:Writing out command to execute
Jan 31 01:58:50 np0005603541 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: ++ cat /run_command
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + ARGS=
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + sudo kolla_copy_cacerts
Jan 31 01:58:50 np0005603541 systemd[1]: Started Session c2 of User root.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + [[ ! -n '' ]]
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + . kolla_extend_start
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + umask 0022
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 01:58:50 np0005603541 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1328] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1335] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <warn>  [1769842730.1337] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1343] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1348] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1351] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 01:58:50 np0005603541 kernel: br-int: entered promiscuous mode
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 01:58:50 np0005603541 ovn_controller[148349]: 2026-01-31T06:58:50Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1515] manager: (ovn-5c7f3d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 01:58:50 np0005603541 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 01:58:50 np0005603541 systemd-udevd[148531]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1661] device (genev_sys_6081): carrier: link connected
Jan 31 01:58:50 np0005603541 systemd-udevd[148532]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.1663] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.4599] manager: (ovn-facc7c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 01:58:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:50 np0005603541 NetworkManager[48983]: <info>  [1769842730.9665] manager: (ovn-3f1b6d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 01:58:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:51.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:53 np0005603541 python3.9[148662]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 01:58:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:58:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:58:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:58:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:58:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:54 np0005603541 python3.9[148818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:58:55 np0005603541 python3.9[148941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842734.5061176-1832-194642965069758/.source.yaml _original_basename=.eyuhx2ht follow=False checksum=3e5620720bba2617b7a3787a2b0e7617152eaa46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:56 np0005603541 python3.9[149094]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:56 np0005603541 ovs-vsctl[149095]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 01:58:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:56 np0005603541 python3.9[149247]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:56 np0005603541 ovs-vsctl[149249]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 01:58:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:58:57 np0005603541 python3.9[149402]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:57 np0005603541 ovs-vsctl[149403]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 01:58:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:57.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:58 np0005603541 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 01:58:58 np0005603541 systemd[1]: session-46.scope: Consumed 50.479s CPU time.
Jan 31 01:58:58 np0005603541 systemd-logind[817]: Session 46 logged out. Waiting for processes to exit.
Jan 31 01:58:58 np0005603541 systemd-logind[817]: Removed session 46.
Jan 31 01:58:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:58:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:58:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:58:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:58:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:58:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:58:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:58:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:58:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:00 np0005603541 systemd[1]: Stopping User Manager for UID 0...
Jan 31 01:59:00 np0005603541 systemd[148378]: Activating special unit Exit the Session...
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped target Main User Target.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped target Basic System.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped target Paths.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped target Sockets.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped target Timers.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 01:59:00 np0005603541 systemd[148378]: Closed D-Bus User Message Bus Socket.
Jan 31 01:59:00 np0005603541 systemd[148378]: Stopped Create User's Volatile Files and Directories.
Jan 31 01:59:00 np0005603541 systemd[148378]: Removed slice User Application Slice.
Jan 31 01:59:00 np0005603541 systemd[148378]: Reached target Shutdown.
Jan 31 01:59:00 np0005603541 systemd[148378]: Finished Exit the Session.
Jan 31 01:59:00 np0005603541 systemd[148378]: Reached target Exit the Session.
Jan 31 01:59:00 np0005603541 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 01:59:00 np0005603541 systemd[1]: Stopped User Manager for UID 0.
Jan 31 01:59:00 np0005603541 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 01:59:00 np0005603541 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 01:59:00 np0005603541 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 01:59:00 np0005603541 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 01:59:00 np0005603541 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 01:59:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:01.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:03.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:05.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:06 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:06 np0005603541 systemd-logind[817]: New session 48 of user zuul.
Jan 31 01:59:06 np0005603541 systemd[1]: Started Session 48 of User zuul.
Jan 31 01:59:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:07.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 01:59:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7882 writes, 32K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7882 writes, 1442 syncs, 5.47 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7882 writes, 32K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 20.43 MB, 0.03 MB/s#012Interval WAL: 7882 writes, 1442 syncs, 5.47 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Jan 31 01:59:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:07.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:07 np0005603541 python3.9[149590]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:59:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:09 np0005603541 python3.9[149748]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:09.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:09 np0005603541 python3.9[149901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 01:59:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:10 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:10 np0005603541 python3.9[150053]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:11 np0005603541 python3.9[150255]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:11 np0005603541 python3.9[150407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:11.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:12 np0005603541 python3.9[150558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:59:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:13 np0005603541 python3.9[150710]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 01:59:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:13.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:14 np0005603541 python3.9[150861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:14 np0005603541 python3.9[150982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842753.7693255-218-156656467968337/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:15.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:15 np0005603541 python3.9[151132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:15.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:15 np0005603541 python3.9[151254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842755.0628748-263-189248100560848/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:16 np0005603541 python3.9[151406]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:59:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:17 np0005603541 python3.9[151490]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:59:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:20 np0005603541 ovn_controller[148349]: 2026-01-31T06:59:20Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Jan 31 01:59:20 np0005603541 ovn_controller[148349]: 2026-01-31T06:59:20Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 31 01:59:20 np0005603541 podman[151518]: 2026-01-31 06:59:20.084918815 +0000 UTC m=+0.113946556 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:59:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:22 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:23 np0005603541 python3.9[151672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 01:59:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:23.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:23.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:24 np0005603541 python3.9[151826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:25 np0005603541 python3.9[151947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842764.291698-374-161806894866028/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:25.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:25 np0005603541 python3.9[152097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:26 np0005603541 python3.9[152219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842765.401444-374-167870725346287/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:27 np0005603541 python3.9[152369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:27 np0005603541 python3.9[152491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842767.021101-506-31146648099761/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:28 np0005603541 python3.9[152641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:28 np0005603541 python3.9[152762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842768.063811-506-178173849684655/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:29 np0005603541 python3.9[152912]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:59:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:29.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:30 np0005603541 python3.9[153067]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:30 np0005603541 python3.9[153219]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:31 np0005603541 python3.9[153347]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:31.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:31 np0005603541 python3.9[153499]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:59:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:59:32 np0005603541 python3.9[153578]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:32 np0005603541 python3.9[153730]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:33 np0005603541 python3.9[153882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:33 np0005603541 python3.9[153960]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:34 np0005603541 python3.9[154113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:34 np0005603541 python3.9[154191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:35 np0005603541 python3.9[154343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:59:35 np0005603541 systemd[1]: Reloading.
Jan 31 01:59:35 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:59:35 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:59:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:35.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:36 np0005603541 python3.9[154532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:37 np0005603541 python3.9[154610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:37 np0005603541 python3.9[154762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:37.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:38 np0005603541 python3.9[154841]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:38 np0005603541 python3.9[154993]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:59:38 np0005603541 systemd[1]: Reloading.
Jan 31 01:59:39 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:59:39 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 01:59:39 np0005603541 systemd[1]: Starting Create netns directory...
Jan 31 01:59:39 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] Check health
Jan 31 01:59:39 np0005603541 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 01:59:39 np0005603541 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 01:59:39 np0005603541 systemd[1]: Finished Create netns directory.
Jan 31 01:59:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:39.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:40 np0005603541 python3.9[155187]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:40 np0005603541 python3.9[155339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:41 np0005603541 python3.9[155462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842780.3063564-959-167031669391288/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:41.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:41.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:42 np0005603541 python3.9[155615]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:43 np0005603541 python3.9[155767]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:43.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:43 np0005603541 python3.9[155920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:44 np0005603541 python3.9[156043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842783.4322214-1058-72258953307318/.source.json _original_basename=.mynb5ma6 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:45 np0005603541 python3.9[156193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:59:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:45.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:59:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:59:47 np0005603541 python3.9[156734]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 01:59:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 01:59:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:47 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev cbb49f9c-c135-40d8-a2c7-bfc38168c6c2 does not exist
Jan 31 01:59:47 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 224d1bf7-422c-4271-94ce-71edebe48fe7 does not exist
Jan 31 01:59:47 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 9e0b01dc-c3fc-4167-a74c-fe538027fad8 does not exist
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 01:59:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 01:59:48 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 01:59:48 np0005603541 python3.9[156954]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 01:59:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.541979251 +0000 UTC m=+0.059167103 container create 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 01:59:48 np0005603541 systemd[1]: Started libpod-conmon-5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52.scope.
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.507933215 +0000 UTC m=+0.025121067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:48 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.640353014 +0000 UTC m=+0.157540906 container init 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.649343014 +0000 UTC m=+0.166530856 container start 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.65447733 +0000 UTC m=+0.171665172 container attach 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 01:59:48 np0005603541 youthful_sammet[157081]: 167 167
Jan 31 01:59:48 np0005603541 systemd[1]: libpod-5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52.scope: Deactivated successfully.
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.657381131 +0000 UTC m=+0.174568943 container died 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:59:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8ab417a048848d72fa3c4a4001adb5d709e1a67428fa66054b1e68eaa12af2aa-merged.mount: Deactivated successfully.
Jan 31 01:59:48 np0005603541 podman[157064]: 2026-01-31 06:59:48.70296626 +0000 UTC m=+0.220154072 container remove 5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:59:48 np0005603541 systemd[1]: libpod-conmon-5c48e0b9dc4ca2222a7c1fa48d537015f7e7f67e2eebc1f20acc1ca476690e52.scope: Deactivated successfully.
Jan 31 01:59:48 np0005603541 podman[157158]: 2026-01-31 06:59:48.851961475 +0000 UTC m=+0.053081323 container create 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 01:59:48 np0005603541 systemd[1]: Started libpod-conmon-63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e.scope.
Jan 31 01:59:48 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:48 np0005603541 podman[157158]: 2026-01-31 06:59:48.834331972 +0000 UTC m=+0.035451830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:48 np0005603541 podman[157158]: 2026-01-31 06:59:48.949043276 +0000 UTC m=+0.150163174 container init 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 01:59:48 np0005603541 podman[157158]: 2026-01-31 06:59:48.962884346 +0000 UTC m=+0.164004234 container start 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:59:48 np0005603541 podman[157158]: 2026-01-31 06:59:48.967212182 +0000 UTC m=+0.168332070 container attach 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 01:59:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_06:59:49
Jan 31 01:59:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 01:59:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 01:59:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 31 01:59:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 01:59:49 np0005603541 python3[157255]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 01:59:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:49.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:49 np0005603541 gallant_bohr[157175]: --> passed data devices: 0 physical, 1 LVM
Jan 31 01:59:49 np0005603541 gallant_bohr[157175]: --> relative data size: 1.0
Jan 31 01:59:49 np0005603541 gallant_bohr[157175]: --> All data devices are unavailable
Jan 31 01:59:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:49 np0005603541 systemd[1]: libpod-63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e.scope: Deactivated successfully.
Jan 31 01:59:49 np0005603541 podman[157158]: 2026-01-31 06:59:49.773202926 +0000 UTC m=+0.974322834 container died 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:59:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-dddbc70b58ed757826a38f3fb1623dcb00e03a8eacf2255c142694eb53825b1b-merged.mount: Deactivated successfully.
Jan 31 01:59:49 np0005603541 podman[157158]: 2026-01-31 06:59:49.838036606 +0000 UTC m=+1.039156464 container remove 63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 01:59:49 np0005603541 systemd[1]: libpod-conmon-63b3deb06da0c3d16d4e1f6269043d614c620db0d4c6a07eeca7aec842d2c39e.scope: Deactivated successfully.
Jan 31 01:59:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:50 np0005603541 podman[157399]: 2026-01-31 06:59:50.274586516 +0000 UTC m=+0.112803649 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 01:59:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.601739102 +0000 UTC m=+0.069272201 container create 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:59:50 np0005603541 systemd[1]: Started libpod-conmon-938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6.scope.
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.569233534 +0000 UTC m=+0.036766653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:50 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.673741228 +0000 UTC m=+0.141274347 container init 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.682086303 +0000 UTC m=+0.149619402 container start 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:59:50 np0005603541 strange_villani[157516]: 167 167
Jan 31 01:59:50 np0005603541 systemd[1]: libpod-938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6.scope: Deactivated successfully.
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.696543957 +0000 UTC m=+0.164077076 container attach 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.696839185 +0000 UTC m=+0.164372284 container died 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:59:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-28b684a8be4c6d5f9e4e5348fd6b46d9884b26bd7d86afb1e6262f8bf0b41166-merged.mount: Deactivated successfully.
Jan 31 01:59:50 np0005603541 podman[157487]: 2026-01-31 06:59:50.743462519 +0000 UTC m=+0.210995618 container remove 938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_villani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 01:59:50 np0005603541 systemd[1]: libpod-conmon-938e55da3a8058ac4778ac23ca9e5af1939b660092829b671f80f47014c557d6.scope: Deactivated successfully.
Jan 31 01:59:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:51.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 01:59:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:53.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 01:59:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 01:59:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 01:59:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 01:59:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:55.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:56 np0005603541 podman[157575]: 2026-01-31 06:59:56.821860547 +0000 UTC m=+5.957131125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:57 np0005603541 podman[157575]: 2026-01-31 06:59:57.021587107 +0000 UTC m=+6.156857675 container create d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 01:59:57 np0005603541 systemd[1]: Started libpod-conmon-d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33.scope.
Jan 31 01:59:57 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d2451778679c9e4683eb08913abadb3a79e128082a5aaaa79ca10cd037d3b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d2451778679c9e4683eb08913abadb3a79e128082a5aaaa79ca10cd037d3b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d2451778679c9e4683eb08913abadb3a79e128082a5aaaa79ca10cd037d3b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d2451778679c9e4683eb08913abadb3a79e128082a5aaaa79ca10cd037d3b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:57 np0005603541 podman[157575]: 2026-01-31 06:59:57.380125732 +0000 UTC m=+6.515396380 container init d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 01:59:57 np0005603541 podman[157575]: 2026-01-31 06:59:57.38857659 +0000 UTC m=+6.523847148 container start d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 01:59:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:57 np0005603541 podman[157575]: 2026-01-31 06:59:57.591275293 +0000 UTC m=+6.726545851 container attach d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:59:57 np0005603541 podman[157268]: 2026-01-31 06:59:57.902232411 +0000 UTC m=+8.501053363 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 01:59:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 01:59:58 np0005603541 podman[157691]: 2026-01-31 06:59:58.021488316 +0000 UTC m=+0.020528224 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]: {
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:    "0": [
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:        {
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "devices": [
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "/dev/loop3"
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            ],
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "lv_name": "ceph_lv0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "lv_size": "7511998464",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "name": "ceph_lv0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "tags": {
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.cephx_lockbox_secret": "",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.cluster_name": "ceph",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.crush_device_class": "",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.encrypted": "0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.osd_id": "0",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.type": "block",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:                "ceph.vdo": "0"
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            },
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "type": "block",
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:            "vg_name": "ceph_vg0"
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:        }
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]:    ]
Jan 31 01:59:58 np0005603541 dreamy_kare[157663]: }
Jan 31 01:59:58 np0005603541 systemd[1]: libpod-d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33.scope: Deactivated successfully.
Jan 31 01:59:58 np0005603541 podman[157575]: 2026-01-31 06:59:58.170691596 +0000 UTC m=+7.305962174 container died d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 01:59:58 np0005603541 podman[157691]: 2026-01-31 06:59:58.397623764 +0000 UTC m=+0.396663642 container create ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Jan 31 01:59:58 np0005603541 python3[157255]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 01:59:58 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e0d2451778679c9e4683eb08913abadb3a79e128082a5aaaa79ca10cd037d3b5-merged.mount: Deactivated successfully.
Jan 31 01:59:58 np0005603541 podman[157575]: 2026-01-31 06:59:58.464982727 +0000 UTC m=+7.600253325 container remove d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 01:59:58 np0005603541 systemd[1]: libpod-conmon-d209f7290ca079881383d9a08631efca7eb90ebfb38ae56690ba7ec4c76dcf33.scope: Deactivated successfully.
Jan 31 01:59:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.120374348 +0000 UTC m=+0.080457814 container create 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.063855459 +0000 UTC m=+0.023938975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:59 np0005603541 systemd[1]: Started libpod-conmon-32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246.scope.
Jan 31 01:59:59 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.242296052 +0000 UTC m=+0.202379508 container init 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.249585769 +0000 UTC m=+0.209669195 container start 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 01:59:59 np0005603541 elegant_wilson[157929]: 167 167
Jan 31 01:59:59 np0005603541 systemd[1]: libpod-32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246.scope: Deactivated successfully.
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.255561975 +0000 UTC m=+0.215645401 container attach 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.256210041 +0000 UTC m=+0.216293477 container died 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 01:59:59 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9ed08434e6e6abc659b279c84ae1b9f0268a20c248c8126e52a57a1224016df8-merged.mount: Deactivated successfully.
Jan 31 01:59:59 np0005603541 podman[157913]: 2026-01-31 06:59:59.314475672 +0000 UTC m=+0.274559098 container remove 32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 01:59:59 np0005603541 systemd[1]: libpod-conmon-32a8a316a6f5ac4d107c18bf07cb6a148c2a026a62801257f3e6892fb1730246.scope: Deactivated successfully.
Jan 31 01:59:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 01:59:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 01:59:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 01:59:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:59.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 01:59:59 np0005603541 podman[157954]: 2026-01-31 06:59:59.537846211 +0000 UTC m=+0.095653834 container create cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 01:59:59 np0005603541 podman[157954]: 2026-01-31 06:59:59.47590049 +0000 UTC m=+0.033708153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 01:59:59 np0005603541 systemd[1]: Started libpod-conmon-cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d.scope.
Jan 31 01:59:59 np0005603541 systemd[1]: Started libcrun container.
Jan 31 01:59:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cbd04e1ffe93e1d2f794335c88130a687520c85e5bfc979ba72f58d7d52e3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cbd04e1ffe93e1d2f794335c88130a687520c85e5bfc979ba72f58d7d52e3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cbd04e1ffe93e1d2f794335c88130a687520c85e5bfc979ba72f58d7d52e3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:59 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6cbd04e1ffe93e1d2f794335c88130a687520c85e5bfc979ba72f58d7d52e3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 01:59:59 np0005603541 podman[157954]: 2026-01-31 06:59:59.696793817 +0000 UTC m=+0.254601430 container init cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 01:59:59 np0005603541 podman[157954]: 2026-01-31 06:59:59.705261854 +0000 UTC m=+0.263069487 container start cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 01:59:59 np0005603541 podman[157954]: 2026-01-31 06:59:59.715085074 +0000 UTC m=+0.272892707 container attach cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 01:59:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 01:59:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 01:59:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 02:00:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 02:00:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:00 np0005603541 bold_swanson[157970]: {
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:        "osd_id": 0,
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:        "type": "bluestore"
Jan 31 02:00:00 np0005603541 bold_swanson[157970]:    }
Jan 31 02:00:00 np0005603541 bold_swanson[157970]: }
Jan 31 02:00:00 np0005603541 systemd[1]: libpod-cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d.scope: Deactivated successfully.
Jan 31 02:00:00 np0005603541 podman[157954]: 2026-01-31 07:00:00.638416355 +0000 UTC m=+1.196224048 container died cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:00:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e6cbd04e1ffe93e1d2f794335c88130a687520c85e5bfc979ba72f58d7d52e3e-merged.mount: Deactivated successfully.
Jan 31 02:00:00 np0005603541 podman[157954]: 2026-01-31 07:00:00.758701629 +0000 UTC m=+1.316509242 container remove cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:00:00 np0005603541 systemd[1]: libpod-conmon-cabc02e0ef8fc8e189e781ae37ce0e3b90434d057cfb5dc55582519cb727085d.scope: Deactivated successfully.
Jan 31 02:00:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:00:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:01.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:03.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:07.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:07.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:09.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:00:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:00:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:09.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:00:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:11.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:00:13 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:00:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev ac340124-ef7d-4d81-8b4e-3a7c4bc74b6e does not exist
Jan 31 02:00:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 30ac7321-05da-4d71-8a5d-cc9d3811d46f does not exist
Jan 31 02:00:13 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev f1636ec0-2996-4e0d-8280-8220de117314 does not exist
Jan 31 02:00:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:14 np0005603541 python3.9[158241]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:00:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:15 np0005603541 python3.9[158395]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:15.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:15 np0005603541 python3.9[158471]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:00:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:16 np0005603541 python3.9[158623]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842815.8124592-1292-209704454167725/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:17 np0005603541 python3.9[158699]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:00:17 np0005603541 systemd[1]: Reloading.
Jan 31 02:00:17 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:17 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:17.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:17 np0005603541 python3.9[158812]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:18 np0005603541 systemd[1]: Reloading.
Jan 31 02:00:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:18 np0005603541 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 02:00:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:00:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/412472e252ad4a624e68772ca2b61e317ada5dd782dd75b98ef92edc5d60d991/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 02:00:18 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/412472e252ad4a624e68772ca2b61e317ada5dd782dd75b98ef92edc5d60d991/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:00:18 np0005603541 systemd[1]: Started /usr/bin/podman healthcheck run ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94.
Jan 31 02:00:18 np0005603541 podman[158853]: 2026-01-31 07:00:18.363646449 +0000 UTC m=+0.115008327 container init ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + sudo -E kolla_set_configs
Jan 31 02:00:18 np0005603541 podman[158853]: 2026-01-31 07:00:18.399007901 +0000 UTC m=+0.150369769 container start ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:00:18 np0005603541 edpm-start-podman-container[158853]: ovn_metadata_agent
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Validating config file
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Copying service configuration files
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Writing out command to execute
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: ++ cat /run_command
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + CMD=neutron-ovn-metadata-agent
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + ARGS=
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + sudo kolla_copy_cacerts
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:18 np0005603541 edpm-start-podman-container[158852]: Creating additional drop-in dependency for "ovn_metadata_agent" (ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94)
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + [[ ! -n '' ]]
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + . kolla_extend_start
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + umask 0022
Jan 31 02:00:18 np0005603541 ovn_metadata_agent[158869]: + exec neutron-ovn-metadata-agent
Jan 31 02:00:18 np0005603541 podman[158876]: 2026-01-31 07:00:18.461647419 +0000 UTC m=+0.055888644 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:00:18 np0005603541 systemd[1]: Reloading.
Jan 31 02:00:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:18 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:18 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:18 np0005603541 systemd[1]: Started ovn_metadata_agent container.
Jan 31 02:00:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:19 np0005603541 python3.9[159109]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 02:00:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:19.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.050 158874 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.050 158874 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.050 158874 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.051 158874 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.052 158874 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.053 158874 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.054 158874 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.055 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.056 158874 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.057 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.058 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.059 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.060 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.061 158874 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.062 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.063 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.064 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.065 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.066 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.067 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.068 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.069 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.070 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.071 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.072 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.073 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.074 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.075 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.076 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.077 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.078 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.079 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.080 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.081 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.082 158874 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.133 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.133 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.133 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.134 158874 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.134 158874 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.149 158874 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e3f3772b-46c1-4a7f-ae43-0efc80b30197 (UUID: e3f3772b-46c1-4a7f-ae43-0efc80b30197) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.220 158874 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.220 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.220 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.220 158874 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.288 158874 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.308 158874 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.317 158874 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e3f3772b-46c1-4a7f-ae43-0efc80b30197'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f45d590d820>], external_ids={}, name=e3f3772b-46c1-4a7f-ae43-0efc80b30197, nb_cfg_timestamp=1769842738153, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.319 158874 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f45d58fcf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.320 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.320 158874 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.320 158874 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.320 158874 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.325 158874 DEBUG oslo_service.service [-] Started child 159135 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.327 159135 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-232412'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.328 158874 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp2t56cfoe/privsep.sock']#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.361 159135 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.362 159135 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.362 159135 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.365 159135 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.372 159135 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:00:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.384 159135 INFO eventlet.wsgi.server [-] (159135) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 31 02:00:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:20 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:20 np0005603541 podman[159239]: 2026-01-31 07:00:20.694445859 +0000 UTC m=+0.078413153 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:00:20 np0005603541 python3.9[159288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:00:20 np0005603541 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.066 158874 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.067 158874 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2t56cfoe/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.905 159297 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.912 159297 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.916 159297 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:20.917 159297 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159297
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.069 159297 DEBUG oslo.privsep.daemon [-] privsep: reply[21518e3b-1f69-46f2-9587-4fbe4fb4ef15]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:00:21 np0005603541 python3.9[159423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842820.3846452-1427-262663627577154/.source.yaml _original_basename=.n6n1f11z follow=False checksum=0432c59ffadcc8d3b3a9efaa1eebb4528ff2936e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.593 159297 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.593 159297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:00:21 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:21.593 159297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:00:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.064 159297 DEBUG oslo.privsep.daemon [-] privsep: reply[39cff8de-ead1-4bf4-9761-7f6e0ae9d4d5]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.067 158874 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e3f3772b-46c1-4a7f-ae43-0efc80b30197, column=external_ids, values=({'neutron:ovn-metadata-id': 'c55be261-2fb0-59ce-8eda-a44ded60ff6c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.074 158874 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e3f3772b-46c1-4a7f-ae43-0efc80b30197, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.080 158874 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.081 158874 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.082 158874 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.083 158874 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.084 158874 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.085 158874 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.086 158874 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.087 158874 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.088 158874 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.089 158874 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.090 158874 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.091 158874 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.092 158874 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.093 158874 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.094 158874 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.095 158874 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.095 158874 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.095 158874 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.095 158874 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.095 158874 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.096 158874 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.097 158874 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.098 158874 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.099 158874 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.100 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.101 158874 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.102 158874 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.103 158874 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.104 158874 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.105 158874 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.106 158874 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.107 158874 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.108 158874 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.109 158874 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.110 158874 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.111 158874 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.112 158874 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.113 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.114 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.115 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.116 158874 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.117 158874 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.117 158874 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.117 158874 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.117 158874 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:00:22 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:00:22.117 158874 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 02:00:22 np0005603541 systemd-logind[817]: Session 48 logged out. Waiting for processes to exit.
Jan 31 02:00:22 np0005603541 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 02:00:22 np0005603541 systemd[1]: session-48.scope: Consumed 51.667s CPU time.
Jan 31 02:00:22 np0005603541 systemd-logind[817]: Removed session 48.
Jan 31 02:00:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:23 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:23.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:27.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:27 np0005603541 systemd-logind[817]: New session 49 of user zuul.
Jan 31 02:00:27 np0005603541 systemd[1]: Started Session 49 of User zuul.
Jan 31 02:00:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:28 np0005603541 python3.9[159606]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:00:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:29.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:29.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:30 np0005603541 python3.9[159763]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:31 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:31 np0005603541 python3.9[159928]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:00:31 np0005603541 systemd[1]: Reloading.
Jan 31 02:00:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:31 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:31 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:32 np0005603541 python3.9[160114]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:00:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:32 np0005603541 network[160131]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:00:32 np0005603541 network[160132]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:00:32 np0005603541 network[160133]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:00:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:35 np0005603541 python3.9[160446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:36 np0005603541 python3.9[160600]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:37 np0005603541 python3.9[160753]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:37.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:38 np0005603541 python3.9[160907]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:38 np0005603541 python3.9[161060]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:39 np0005603541 python3.9[161213]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:39.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:40 np0005603541 python3.9[161367]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:00:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:41 np0005603541 python3.9[161520]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:41 np0005603541 python3.9[161672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:42 np0005603541 python3.9[161825]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:42 np0005603541 python3.9[161977]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:43 np0005603541 python3.9[162129]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:43.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:43 np0005603541 python3.9[162281]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:44 np0005603541 python3.9[162434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:45 np0005603541 python3.9[162586]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:45.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:45 np0005603541 python3.9[162739]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:46 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:46 np0005603541 python3.9[162891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:46 np0005603541 python3.9[163043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:47 np0005603541 python3.9[163195]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:47.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:47 np0005603541 python3.9[163348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:48 np0005603541 python3.9[163500]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:00:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:49 np0005603541 podman[163580]: 2026-01-31 07:00:49.040717613 +0000 UTC m=+0.081788067 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:00:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:00:49
Jan 31 02:00:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:00:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:00:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Jan 31 02:00:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:00:49 np0005603541 python3.9[163671]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:50 np0005603541 python3.9[163824]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:00:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:50 np0005603541 python3.9[163976]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:00:50 np0005603541 systemd[1]: Reloading.
Jan 31 02:00:51 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:51 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:51 np0005603541 podman[163977]: 2026-01-31 07:00:51.060332294 +0000 UTC m=+0.105971527 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 02:00:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:51.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:51 np0005603541 python3.9[164188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:52 np0005603541 python3.9[164341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:52 np0005603541 python3.9[164494]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:53 np0005603541 python3.9[164647]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:54 np0005603541 python3.9[164801]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:54 np0005603541 python3.9[165004]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:00:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:00:55 np0005603541 python3.9[165157]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:00:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:00:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:56 np0005603541 python3.9[165311]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 02:00:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:57 np0005603541 python3.9[165464]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:00:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:00:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:00:57 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:00:57 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:00:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:00:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:58 np0005603541 python3.9[165624]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:00:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:00:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:00:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:00:59 np0005603541 python3.9[165784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:00:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:00:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:00:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:00:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:00 np0005603541 python3.9[165869]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:01:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:01.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 02:01:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:08 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 02:01:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:09.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:01:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 02:01:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:01:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:01:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 02:01:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:14.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:14 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev f7c7f4f6-48ed-4890-8949-497f7f30e2e2 does not exist
Jan 31 02:01:14 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev a4d542cc-385a-45f5-967a-d21a4befa23f does not exist
Jan 31 02:01:14 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c27ec033-a997-4656-8772-9fbf6d1981ee does not exist
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:01:14 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:15 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.42867153 +0000 UTC m=+0.048256245 container create ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:01:15 np0005603541 systemd[1]: Started libpod-conmon-ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c.scope.
Jan 31 02:01:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.401644147 +0000 UTC m=+0.021228952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.507408953 +0000 UTC m=+0.126993778 container init ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.514936318 +0000 UTC m=+0.134521043 container start ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.518211549 +0000 UTC m=+0.137796284 container attach ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:01:15 np0005603541 frosty_gates[166408]: 167 167
Jan 31 02:01:15 np0005603541 systemd[1]: libpod-ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c.scope: Deactivated successfully.
Jan 31 02:01:15 np0005603541 conmon[166408]: conmon ca7a4899c97b6bd99d0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c.scope/container/memory.events
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.521281114 +0000 UTC m=+0.140865869 container died ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:01:15 np0005603541 systemd[1]: var-lib-containers-storage-overlay-5b508b9ff55f1d2d3fe53f205fa8d553b9679f44ab357627d71877a60d262ab9-merged.mount: Deactivated successfully.
Jan 31 02:01:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:15 np0005603541 podman[166392]: 2026-01-31 07:01:15.608173487 +0000 UTC m=+0.227758212 container remove ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:01:15 np0005603541 systemd[1]: libpod-conmon-ca7a4899c97b6bd99d0e1f3ef80b9c02fc37aee30e9f4e24a606120afee6621c.scope: Deactivated successfully.
Jan 31 02:01:15 np0005603541 podman[166431]: 2026-01-31 07:01:15.724902423 +0000 UTC m=+0.039783028 container create 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:01:15 np0005603541 systemd[1]: Started libpod-conmon-581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c.scope.
Jan 31 02:01:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:15 np0005603541 podman[166431]: 2026-01-31 07:01:15.70850703 +0000 UTC m=+0.023387655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:15 np0005603541 podman[166431]: 2026-01-31 07:01:15.829117851 +0000 UTC m=+0.143998486 container init 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:01:15 np0005603541 podman[166431]: 2026-01-31 07:01:15.837191059 +0000 UTC m=+0.152071674 container start 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:01:15 np0005603541 podman[166431]: 2026-01-31 07:01:15.840477699 +0000 UTC m=+0.155358344 container attach 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:01:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:16.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 02:01:16 np0005603541 wizardly_goldwasser[166447]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:01:16 np0005603541 wizardly_goldwasser[166447]: --> relative data size: 1.0
Jan 31 02:01:16 np0005603541 wizardly_goldwasser[166447]: --> All data devices are unavailable
Jan 31 02:01:16 np0005603541 systemd[1]: libpod-581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c.scope: Deactivated successfully.
Jan 31 02:01:16 np0005603541 podman[166431]: 2026-01-31 07:01:16.609110208 +0000 UTC m=+0.923990813 container died 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:01:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d42bdb8214b9e8a20f5e8c1b75acb302fedfea05a79cd72388c575327212e05b-merged.mount: Deactivated successfully.
Jan 31 02:01:16 np0005603541 podman[166431]: 2026-01-31 07:01:16.681830293 +0000 UTC m=+0.996710938 container remove 581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:01:16 np0005603541 systemd[1]: libpod-conmon-581e0a9e85e72487982478f628cf8ebbd9eced7f8e7445fd53c2612c6c9baa0c.scope: Deactivated successfully.
Jan 31 02:01:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.216456057 +0000 UTC m=+0.055300688 container create d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:01:17 np0005603541 systemd[1]: Started libpod-conmon-d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f.scope.
Jan 31 02:01:17 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.189773192 +0000 UTC m=+0.028617873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.298261905 +0000 UTC m=+0.137106566 container init d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.307768648 +0000 UTC m=+0.146613279 container start d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.312438104 +0000 UTC m=+0.151282905 container attach d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:01:17 np0005603541 brave_matsumoto[166636]: 167 167
Jan 31 02:01:17 np0005603541 systemd[1]: libpod-d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f.scope: Deactivated successfully.
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.314729139 +0000 UTC m=+0.153573770 container died d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:01:17 np0005603541 systemd[1]: var-lib-containers-storage-overlay-192c6a3d7d655120eb6c13262fcbcead9d03208e2d0137aafeed20336f669b6f-merged.mount: Deactivated successfully.
Jan 31 02:01:17 np0005603541 podman[166620]: 2026-01-31 07:01:17.362229216 +0000 UTC m=+0.201073797 container remove d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:01:17 np0005603541 systemd[1]: libpod-conmon-d0a272598cb878bcffa06b023b2612bc87345ee35953de12dc26dd85ccfd1e8f.scope: Deactivated successfully.
Jan 31 02:01:17 np0005603541 podman[166661]: 2026-01-31 07:01:17.48461596 +0000 UTC m=+0.042551956 container create 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:01:17 np0005603541 systemd[1]: Started libpod-conmon-62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c.scope.
Jan 31 02:01:17 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:17 np0005603541 podman[166661]: 2026-01-31 07:01:17.465971702 +0000 UTC m=+0.023907678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:17 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f516d8fa905a7763e804e85f80a3b49201d63500a3e83e64013c59eb8b8340/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:17 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f516d8fa905a7763e804e85f80a3b49201d63500a3e83e64013c59eb8b8340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:17 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f516d8fa905a7763e804e85f80a3b49201d63500a3e83e64013c59eb8b8340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:17 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f516d8fa905a7763e804e85f80a3b49201d63500a3e83e64013c59eb8b8340/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:17 np0005603541 podman[166661]: 2026-01-31 07:01:17.580845682 +0000 UTC m=+0.138781688 container init 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:01:17 np0005603541 podman[166661]: 2026-01-31 07:01:17.5893143 +0000 UTC m=+0.147250276 container start 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:01:17 np0005603541 podman[166661]: 2026-01-31 07:01:17.593479892 +0000 UTC m=+0.151415898 container attach 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:01:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:17.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:18 np0005603541 gracious_carson[166678]: {
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:    "0": [
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:        {
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "devices": [
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "/dev/loop3"
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            ],
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "lv_name": "ceph_lv0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "lv_size": "7511998464",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "name": "ceph_lv0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "tags": {
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.cluster_name": "ceph",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.crush_device_class": "",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.encrypted": "0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.osd_id": "0",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.type": "block",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:                "ceph.vdo": "0"
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            },
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "type": "block",
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:            "vg_name": "ceph_vg0"
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:        }
Jan 31 02:01:18 np0005603541 gracious_carson[166678]:    ]
Jan 31 02:01:18 np0005603541 gracious_carson[166678]: }
Jan 31 02:01:18 np0005603541 systemd[1]: libpod-62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c.scope: Deactivated successfully.
Jan 31 02:01:18 np0005603541 podman[166661]: 2026-01-31 07:01:18.25005806 +0000 UTC m=+0.807994016 container died 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:01:18 np0005603541 systemd[1]: var-lib-containers-storage-overlay-80f516d8fa905a7763e804e85f80a3b49201d63500a3e83e64013c59eb8b8340-merged.mount: Deactivated successfully.
Jan 31 02:01:18 np0005603541 podman[166661]: 2026-01-31 07:01:18.29526448 +0000 UTC m=+0.853200436 container remove 62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:01:18 np0005603541 systemd[1]: libpod-conmon-62fd836f83ab2e4d2643a53f37362c1c2b84b4e6a3235901b116d97783cec33c.scope: Deactivated successfully.
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 02:01:18 np0005603541 podman[166842]: 2026-01-31 07:01:18.940782947 +0000 UTC m=+0.048956053 container create f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:01:18 np0005603541 systemd[1]: Started libpod-conmon-f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139.scope.
Jan 31 02:01:19 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:18.917909524 +0000 UTC m=+0.026082680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:19.018856512 +0000 UTC m=+0.127029678 container init f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:19.024128392 +0000 UTC m=+0.132301468 container start f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:19.027657859 +0000 UTC m=+0.135830965 container attach f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:01:19 np0005603541 laughing_germain[166858]: 167 167
Jan 31 02:01:19 np0005603541 systemd[1]: libpod-f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139.scope: Deactivated successfully.
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:19.029286849 +0000 UTC m=+0.137459985 container died f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:01:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-def083994a616b9743b2ba168b7b2329bfbd7dba81648c4456c06219bad710f4-merged.mount: Deactivated successfully.
Jan 31 02:01:19 np0005603541 podman[166842]: 2026-01-31 07:01:19.071535206 +0000 UTC m=+0.179708302 container remove f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:01:19 np0005603541 systemd[1]: libpod-conmon-f15d998826f59272f986a9ce55e134ea898adf8fb4bb0cec89a70e805a463139.scope: Deactivated successfully.
Jan 31 02:01:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:19 np0005603541 podman[166872]: 2026-01-31 07:01:19.164296343 +0000 UTC m=+0.088185626 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:01:19 np0005603541 podman[166900]: 2026-01-31 07:01:19.213405698 +0000 UTC m=+0.056954418 container create 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:01:19 np0005603541 systemd[1]: Started libpod-conmon-9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e.scope.
Jan 31 02:01:19 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:01:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5db43c7a4f6c1e2647e660c33013ed780628599060ee19c05495d6aed39629a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5db43c7a4f6c1e2647e660c33013ed780628599060ee19c05495d6aed39629a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5db43c7a4f6c1e2647e660c33013ed780628599060ee19c05495d6aed39629a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5db43c7a4f6c1e2647e660c33013ed780628599060ee19c05495d6aed39629a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:01:19 np0005603541 podman[166900]: 2026-01-31 07:01:19.286881192 +0000 UTC m=+0.130429952 container init 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:01:19 np0005603541 podman[166900]: 2026-01-31 07:01:19.198827671 +0000 UTC m=+0.042376411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:01:19 np0005603541 podman[166900]: 2026-01-31 07:01:19.29453123 +0000 UTC m=+0.138079960 container start 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:01:19 np0005603541 podman[166900]: 2026-01-31 07:01:19.313984387 +0000 UTC m=+0.157533107 container attach 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:01:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:19.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:01:20.127 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:01:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:01:20.127 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:01:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:01:20.127 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]: {
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:        "osd_id": 0,
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:        "type": "bluestore"
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]:    }
Jan 31 02:01:20 np0005603541 goofy_chandrasekhar[166917]: }
Jan 31 02:01:20 np0005603541 systemd[1]: libpod-9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e.scope: Deactivated successfully.
Jan 31 02:01:20 np0005603541 podman[166900]: 2026-01-31 07:01:20.166462474 +0000 UTC m=+1.010011234 container died 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:01:20 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f5db43c7a4f6c1e2647e660c33013ed780628599060ee19c05495d6aed39629a-merged.mount: Deactivated successfully.
Jan 31 02:01:20 np0005603541 podman[166900]: 2026-01-31 07:01:20.284248845 +0000 UTC m=+1.127797605 container remove 9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:01:20 np0005603541 systemd[1]: libpod-conmon-9fc7674c4d8941a86399ae9fc059d1127a898caec95a9d65f3041d39e9c5d32e.scope: Deactivated successfully.
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:01:20 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:20 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev ac2eea47-8b4d-45b5-9954-9b18c6dd870e does not exist
Jan 31 02:01:20 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0ccdde3f-3ec5-4d99-96b3-3293bff86b61 does not exist
Jan 31 02:01:20 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 360c4feb-d641-4e47-966d-11f74a446bb9 does not exist
Jan 31 02:01:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 02:01:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:01:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:22.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:22 np0005603541 podman[167003]: 2026-01-31 07:01:22.07641358 +0000 UTC m=+0.110219217 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 31 02:01:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 31 02:01:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:23.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:24.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 31 02:01:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:25 np0005603541 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:01:25 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:01:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:27.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:29 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:30.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:31.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:34 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 02:01:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:34 np0005603541 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:01:34 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:01:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:35.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:36.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:37.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:39.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:41.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:43.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:45.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:47.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:01:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:01:49
Jan 31 02:01:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:01:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:01:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Jan 31 02:01:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:01:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:49 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 02:01:50 np0005603541 podman[170384]: 2026-01-31 07:01:50.02565382 +0000 UTC m=+0.057097523 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:01:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:53 np0005603541 podman[173038]: 2026-01-31 07:01:53.030198436 +0000 UTC m=+0.069938838 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:01:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:01:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:01:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:55.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:56.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:01:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:57.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:01:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:01:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:58.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:01:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:01:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:01:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:01:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:01:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:01:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:59.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:01:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:01:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:00.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:02.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:05.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:06.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:07.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:08.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:10.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:02:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:11.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:12.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:15 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:15.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:02:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:02:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:17.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:18.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:18 np0005603541 kernel: SELinux:  Converting 2781 SID table entries...
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:02:18 np0005603541 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:02:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:19 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 02:02:19 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 02:02:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:19 np0005603541 dbus-broker-launch[807]: Noticed file-system modification, trigger reload.
Jan 31 02:02:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:02:20.127 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:02:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:02:20.128 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:02:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:02:20.129 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:02:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:20.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:20 np0005603541 podman[184172]: 2026-01-31 07:02:20.693688346 +0000 UTC m=+0.091586694 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:02:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:20 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 82c33265-eece-4712-a04d-4ca97946d201 does not exist
Jan 31 02:02:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 85b57066-da87-406c-b049-b0ed5684bbaa does not exist
Jan 31 02:02:21 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 06f1b9f0-7832-4786-aafe-e19a55c9da95 does not exist
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:02:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:02:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:21 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.049531609 +0000 UTC m=+0.044604954 container create 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:02:22 np0005603541 systemd[1]: Started libpod-conmon-11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1.scope.
Jan 31 02:02:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.115888445 +0000 UTC m=+0.110961810 container init 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.02304677 +0000 UTC m=+0.018120145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.122525278 +0000 UTC m=+0.117598623 container start 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:02:22 np0005603541 nostalgic_panini[184520]: 167 167
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.12669454 +0000 UTC m=+0.121767915 container attach 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:02:22 np0005603541 systemd[1]: libpod-11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1.scope: Deactivated successfully.
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.128272998 +0000 UTC m=+0.123346343 container died 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:02:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:22 np0005603541 systemd[1]: var-lib-containers-storage-overlay-310bcdea5d07257983ad8f137258e9895a2d582357d37cc5828087d9e114697a-merged.mount: Deactivated successfully.
Jan 31 02:02:22 np0005603541 podman[184504]: 2026-01-31 07:02:22.171522778 +0000 UTC m=+0.166596123 container remove 11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:02:22 np0005603541 systemd[1]: libpod-conmon-11bb7fae7eb70b5d36504ce33a3195c04d3efe9e65d65ab72a81285b62f443e1.scope: Deactivated successfully.
Jan 31 02:02:22 np0005603541 podman[184545]: 2026-01-31 07:02:22.305533412 +0000 UTC m=+0.043721563 container create cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:02:22 np0005603541 systemd[1]: Started libpod-conmon-cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f.scope.
Jan 31 02:02:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:22 np0005603541 podman[184545]: 2026-01-31 07:02:22.289303144 +0000 UTC m=+0.027491325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:22 np0005603541 podman[184545]: 2026-01-31 07:02:22.390461843 +0000 UTC m=+0.128650014 container init cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:02:22 np0005603541 podman[184545]: 2026-01-31 07:02:22.399044073 +0000 UTC m=+0.137232214 container start cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:02:22 np0005603541 podman[184545]: 2026-01-31 07:02:22.402327934 +0000 UTC m=+0.140516095 container attach cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:02:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:23 np0005603541 flamboyant_brahmagupta[184562]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:02:23 np0005603541 flamboyant_brahmagupta[184562]: --> relative data size: 1.0
Jan 31 02:02:23 np0005603541 flamboyant_brahmagupta[184562]: --> All data devices are unavailable
Jan 31 02:02:23 np0005603541 podman[184644]: 2026-01-31 07:02:23.179440756 +0000 UTC m=+0.072868857 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:02:23 np0005603541 systemd[1]: libpod-cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f.scope: Deactivated successfully.
Jan 31 02:02:23 np0005603541 podman[184683]: 2026-01-31 07:02:23.218666827 +0000 UTC m=+0.021645922 container died cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:02:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-139b6f201b5f341c22d53fa03df8e6331c14d9f3f35ea64220b3f4c1c659dea2-merged.mount: Deactivated successfully.
Jan 31 02:02:23 np0005603541 podman[184683]: 2026-01-31 07:02:23.268455636 +0000 UTC m=+0.071434711 container remove cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:02:23 np0005603541 systemd[1]: libpod-conmon-cf81fac5179fbd36cd10cc0994c78118689d71c0f5e4b94b7eef6a3b9d52158f.scope: Deactivated successfully.
Jan 31 02:02:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.772684891 +0000 UTC m=+0.042209415 container create 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:02:23 np0005603541 systemd[1]: Started libpod-conmon-87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582.scope.
Jan 31 02:02:23 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.751460511 +0000 UTC m=+0.020985075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.858875464 +0000 UTC m=+0.128399998 container init 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.867360921 +0000 UTC m=+0.136885435 container start 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.871121314 +0000 UTC m=+0.140645878 container attach 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:02:23 np0005603541 naughty_satoshi[184939]: 167 167
Jan 31 02:02:23 np0005603541 systemd[1]: libpod-87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582.scope: Deactivated successfully.
Jan 31 02:02:23 np0005603541 conmon[184939]: conmon 87047fcab02276406c3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582.scope/container/memory.events
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.873653416 +0000 UTC m=+0.143177930 container died 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:02:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-42417b4842eb81516f2043dd48ea0a06d24455ec7fbe3257179b5f57aadbe8e2-merged.mount: Deactivated successfully.
Jan 31 02:02:23 np0005603541 podman[184922]: 2026-01-31 07:02:23.933915973 +0000 UTC m=+0.203440497 container remove 87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:02:23 np0005603541 systemd[1]: libpod-conmon-87047fcab02276406c3c904a8e4f74d208c3881a2db333a8c4e85bbe930f2582.scope: Deactivated successfully.
Jan 31 02:02:24 np0005603541 podman[184976]: 2026-01-31 07:02:24.083057047 +0000 UTC m=+0.043866046 container create 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:02:24 np0005603541 systemd[1]: Started libpod-conmon-1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03.scope.
Jan 31 02:02:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:24.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:24 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:24 np0005603541 podman[184976]: 2026-01-31 07:02:24.062834622 +0000 UTC m=+0.023643671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580ce041241c7af719a3cf4a746f83a371b168f193e7c3634ec7a04fe67c402b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580ce041241c7af719a3cf4a746f83a371b168f193e7c3634ec7a04fe67c402b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580ce041241c7af719a3cf4a746f83a371b168f193e7c3634ec7a04fe67c402b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:24 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580ce041241c7af719a3cf4a746f83a371b168f193e7c3634ec7a04fe67c402b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:24 np0005603541 podman[184976]: 2026-01-31 07:02:24.182739869 +0000 UTC m=+0.143548908 container init 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:02:24 np0005603541 podman[184976]: 2026-01-31 07:02:24.192747694 +0000 UTC m=+0.153556733 container start 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:02:24 np0005603541 podman[184976]: 2026-01-31 07:02:24.200453073 +0000 UTC m=+0.161262112 container attach 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:02:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]: {
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:    "0": [
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:        {
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "devices": [
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "/dev/loop3"
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            ],
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "lv_name": "ceph_lv0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "lv_size": "7511998464",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "name": "ceph_lv0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "tags": {
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.cluster_name": "ceph",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.crush_device_class": "",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.encrypted": "0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.osd_id": "0",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.type": "block",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:                "ceph.vdo": "0"
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            },
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "type": "block",
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:            "vg_name": "ceph_vg0"
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:        }
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]:    ]
Jan 31 02:02:24 np0005603541 gallant_mirzakhani[184993]: }
Jan 31 02:02:24 np0005603541 systemd[1]: libpod-1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03.scope: Deactivated successfully.
Jan 31 02:02:25 np0005603541 podman[185002]: 2026-01-31 07:02:25.005558611 +0000 UTC m=+0.040098144 container died 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:02:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-580ce041241c7af719a3cf4a746f83a371b168f193e7c3634ec7a04fe67c402b-merged.mount: Deactivated successfully.
Jan 31 02:02:25 np0005603541 podman[185002]: 2026-01-31 07:02:25.097522955 +0000 UTC m=+0.132062428 container remove 1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:02:25 np0005603541 systemd[1]: libpod-conmon-1131a40e75fd186aefa5a8ad70a42906941cf1507fb3136a6401b3ed401aab03.scope: Deactivated successfully.
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.610882723 +0000 UTC m=+0.048959001 container create da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:02:25 np0005603541 systemd[1]: Started libpod-conmon-da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725.scope.
Jan 31 02:02:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.586790893 +0000 UTC m=+0.024867201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.6960714 +0000 UTC m=+0.134147708 container init da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.701467423 +0000 UTC m=+0.139543701 container start da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:02:25 np0005603541 systemd[1]: libpod-da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725.scope: Deactivated successfully.
Jan 31 02:02:25 np0005603541 modest_mcnulty[185177]: 167 167
Jan 31 02:02:25 np0005603541 conmon[185177]: conmon da1ae728d95e5c646ab9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725.scope/container/memory.events
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.710047453 +0000 UTC m=+0.148123751 container attach da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.710830442 +0000 UTC m=+0.148906720 container died da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:02:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c288e47d34ca6694f4f36728839097077b1e1713a59ae974e03f0cce4230bbf5-merged.mount: Deactivated successfully.
Jan 31 02:02:25 np0005603541 podman[185159]: 2026-01-31 07:02:25.762790385 +0000 UTC m=+0.200866663 container remove da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:02:25 np0005603541 systemd[1]: libpod-conmon-da1ae728d95e5c646ab9936e67b6d4bb7065b61358c58d5428d81aaede66a725.scope: Deactivated successfully.
Jan 31 02:02:25 np0005603541 podman[185246]: 2026-01-31 07:02:25.889999223 +0000 UTC m=+0.043766035 container create 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:02:25 np0005603541 systemd[1]: Started libpod-conmon-32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d.scope.
Jan 31 02:02:25 np0005603541 podman[185246]: 2026-01-31 07:02:25.87155042 +0000 UTC m=+0.025317242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:02:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:02:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb8e0009be513938530bf152eb386dfca0582d2aea3c853b90e2bd4e880f85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb8e0009be513938530bf152eb386dfca0582d2aea3c853b90e2bd4e880f85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb8e0009be513938530bf152eb386dfca0582d2aea3c853b90e2bd4e880f85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:25 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb8e0009be513938530bf152eb386dfca0582d2aea3c853b90e2bd4e880f85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:02:26 np0005603541 podman[185246]: 2026-01-31 07:02:26.005975384 +0000 UTC m=+0.159742236 container init 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:02:26 np0005603541 podman[185246]: 2026-01-31 07:02:26.014694678 +0000 UTC m=+0.168461500 container start 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:02:26 np0005603541 podman[185246]: 2026-01-31 07:02:26.020857459 +0000 UTC m=+0.174624281 container attach 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 02:02:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:26 np0005603541 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 02:02:26 np0005603541 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 02:02:26 np0005603541 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 02:02:26 np0005603541 systemd[1]: sshd.service: Consumed 1.984s CPU time, read 32.0K from disk, written 0B to disk.
Jan 31 02:02:26 np0005603541 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 02:02:26 np0005603541 systemd[1]: Stopping sshd-keygen.target...
Jan 31 02:02:26 np0005603541 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:02:26 np0005603541 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:02:26 np0005603541 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:02:26 np0005603541 systemd[1]: Reached target sshd-keygen.target.
Jan 31 02:02:26 np0005603541 systemd[1]: Starting OpenSSH server daemon...
Jan 31 02:02:26 np0005603541 systemd[1]: Started OpenSSH server daemon.
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]: {
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:        "osd_id": 0,
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:        "type": "bluestore"
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]:    }
Jan 31 02:02:26 np0005603541 dreamy_germain[185338]: }
Jan 31 02:02:26 np0005603541 podman[185246]: 2026-01-31 07:02:26.847504865 +0000 UTC m=+1.001271707 container died 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:02:26 np0005603541 systemd[1]: libpod-32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d.scope: Deactivated successfully.
Jan 31 02:02:26 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b0fb8e0009be513938530bf152eb386dfca0582d2aea3c853b90e2bd4e880f85-merged.mount: Deactivated successfully.
Jan 31 02:02:26 np0005603541 podman[185246]: 2026-01-31 07:02:26.916531706 +0000 UTC m=+1.070298518 container remove 32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_germain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:02:26 np0005603541 systemd[1]: libpod-conmon-32212e434557c14dcfd508ada946437899a1aed555ce1b575954aabfbedeb41d.scope: Deactivated successfully.
Jan 31 02:02:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:02:26 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:02:27 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 6008e645-c4bb-49f7-b9f3-12cac262f9a9 does not exist
Jan 31 02:02:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev b0e0be02-6deb-4017-be32-62bfd3a0013b does not exist
Jan 31 02:02:27 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 5be93b54-6506-4a93-82dc-e95a32684473 does not exist
Jan 31 02:02:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:02:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:28.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:28 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:02:28 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:02:28 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:28 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:28 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:28 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:02:28 np0005603541 auditd[697]: Audit daemon rotating log files
Jan 31 02:02:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:30.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:31.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:34.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:34 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:02:34 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:02:34 np0005603541 systemd[1]: man-db-cache-update.service: Consumed 8.065s CPU time.
Jan 31 02:02:34 np0005603541 systemd[1]: run-ref2cb4c2098f4a5fa7675b7f7610367e.service: Deactivated successfully.
Jan 31 02:02:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:34 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:36.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:02:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:37.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:02:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:38.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:44.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:45 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:45.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:46.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:46 np0005603541 python3.9[194773]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:02:46 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:46 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:46 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:47 np0005603541 python3.9[194963]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:02:47 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:47 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:47 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:47.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:48.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:48 np0005603541 python3.9[195154]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:48 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:02:48 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:48 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:02:49
Jan 31 02:02:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:02:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:02:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', '.mgr', '.rgw.root', 'default.rgw.log']
Jan 31 02:02:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:49 np0005603541 python3.9[195344]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.550174) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969550241, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3072, "num_deletes": 509, "total_data_size": 4398955, "memory_usage": 4484144, "flush_reason": "Manual Compaction"}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969576096, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 4296122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12746, "largest_seqno": 15817, "table_properties": {"data_size": 4283786, "index_size": 7166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 33505, "raw_average_key_size": 20, "raw_value_size": 4254548, "raw_average_value_size": 2614, "num_data_blocks": 312, "num_entries": 1627, "num_filter_entries": 1627, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842720, "oldest_key_time": 1769842720, "file_creation_time": 1769842969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 25969 microseconds, and 6435 cpu microseconds.
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.576151) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 4296122 bytes OK
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.576172) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.577724) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.577740) EVENT_LOG_v1 {"time_micros": 1769842969577736, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.577758) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4385174, prev total WAL file size 4385174, number of live WAL files 2.
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.578744) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(4195KB)], [29(7392KB)]
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969578833, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 11865975, "oldest_snapshot_seqno": -1}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 5096 keys, 9720167 bytes, temperature: kUnknown
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969643681, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 9720167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9684964, "index_size": 21333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 128065, "raw_average_key_size": 25, "raw_value_size": 9591393, "raw_average_value_size": 1882, "num_data_blocks": 889, "num_entries": 5096, "num_filter_entries": 5096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769842969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.644039) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9720167 bytes
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.645491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 7.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 6131, records dropped: 1035 output_compression: NoCompression
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.645522) EVENT_LOG_v1 {"time_micros": 1769842969645506, "job": 12, "event": "compaction_finished", "compaction_time_micros": 65000, "compaction_time_cpu_micros": 16062, "output_level": 6, "num_output_files": 1, "total_output_size": 9720167, "num_input_records": 6131, "num_output_records": 5096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969646566, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969648047, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.578584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.648214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.648221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.648223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.648226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:02:49.648228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:02:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:49.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:50.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:50 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:50 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:50 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:50 np0005603541 podman[195384]: 2026-01-31 07:02:50.95579105 +0000 UTC m=+0.107144317 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:02:51 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:51 np0005603541 python3.9[195553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:51 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:51 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:51 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:52 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:52 np0005603541 python3.9[195743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:52 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:52 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:52 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:53 np0005603541 podman[195904]: 2026-01-31 07:02:53.535300415 +0000 UTC m=+0.128929500 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:02:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:53 np0005603541 python3.9[195945]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:53 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:53 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:53 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:02:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:02:54 np0005603541 python3.9[196151]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:55 np0005603541 python3.9[196356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:55 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:55 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:55 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:02:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:56.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:02:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:56 np0005603541 python3.9[196546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:02:57 np0005603541 systemd[1]: Reloading.
Jan 31 02:02:57 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:57 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:57 np0005603541 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 02:02:57 np0005603541 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 02:02:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:57.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:58 np0005603541 python3.9[196740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:02:59 np0005603541 python3.9[196895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:02:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:02:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:02:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:02:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:02:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:02:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:02:59 np0005603541 python3.9[197050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:00 np0005603541 python3.9[197206]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:01 np0005603541 python3.9[197361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:02.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:02 np0005603541 python3.9[197517]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:03 np0005603541 python3.9[197672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:04 np0005603541 python3.9[197828]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:04 np0005603541 python3.9[197983]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:05 np0005603541 python3.9[198138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:06 np0005603541 python3.9[198294]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:07 np0005603541 python3.9[198449]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:08.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:08 np0005603541 python3.9[198605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:09 np0005603541 python3.9[198760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:03:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:09.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:10 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:03:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:10.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:10 np0005603541 python3.9[198916]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:11 np0005603541 python3.9[199068]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:11 np0005603541 python3.9[199220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:12.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:12 np0005603541 python3.9[199373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:12 np0005603541 python3.9[199525]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:13 np0005603541 python3.9[199677]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:03:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:14 np0005603541 python3.9[199828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:03:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:15 np0005603541 python3.9[200031]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:16 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:16 np0005603541 python3.9[200156]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842995.3283496-1645-93567417797900/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:17 np0005603541 python3.9[200308]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:17 np0005603541 python3.9[200434]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842996.7384422-1645-225389887260379/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:18 np0005603541 python3.9[200586]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:19 np0005603541 python3.9[200711]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842998.1489608-1645-143813248530030/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:19 np0005603541 python3.9[200863]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:03:20.129 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:03:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:03:20.130 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:03:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:03:20.131 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:03:20 np0005603541 python3.9[200989]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842999.2817326-1645-263258285458845/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:20 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:20 np0005603541 python3.9[201141]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:21 np0005603541 podman[201238]: 2026-01-31 07:03:21.231443926 +0000 UTC m=+0.080645850 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:03:21 np0005603541 python3.9[201282]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843000.3604949-1645-143564783106927/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:22 np0005603541 python3.9[201438]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:22 np0005603541 python3.9[201563]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843001.6191554-1645-123771207005076/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:23 np0005603541 python3.9[201715]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:23 np0005603541 podman[201810]: 2026-01-31 07:03:23.680521771 +0000 UTC m=+0.086962317 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 02:03:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:23 np0005603541 python3.9[201858]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843002.8623257-1645-732124404008/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:24.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:24 np0005603541 python3.9[202017]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:24 np0005603541 python3.9[202142]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843003.9298851-1645-215026778036171/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:25 np0005603541 python3.9[202294]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 02:03:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:25.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:26.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:26 np0005603541 python3.9[202448]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:27 np0005603541 python3.9[202600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:27 np0005603541 python3.9[202775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:28 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 8b483ae9-0253-4c60-99f1-d0f5ea31f253 does not exist
Jan 31 02:03:28 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev ffad94cb-35b4-43f7-917a-6a0311df8f93 does not exist
Jan 31 02:03:28 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d0f96b20-57fe-417f-afdb-d679c13ed082 does not exist
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:28 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:03:28 np0005603541 python3.9[203036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:28.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.547885874 +0000 UTC m=+0.038758814 container create 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:03:28 np0005603541 systemd[1]: Started libpod-conmon-9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9.scope.
Jan 31 02:03:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:28 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.626277709 +0000 UTC m=+0.117150659 container init 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.530383783 +0000 UTC m=+0.021256763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.636485239 +0000 UTC m=+0.127358169 container start 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.639918194 +0000 UTC m=+0.130791124 container attach 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:03:28 np0005603541 inspiring_galois[203342]: 167 167
Jan 31 02:03:28 np0005603541 systemd[1]: libpod-9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9.scope: Deactivated successfully.
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.644512056 +0000 UTC m=+0.135384996 container died 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:03:28 np0005603541 systemd[1]: var-lib-containers-storage-overlay-80389611660c7fe89b164001a2394bde68b47b5d9c0b444cb7aaeffe8402c33b-merged.mount: Deactivated successfully.
Jan 31 02:03:28 np0005603541 podman[203279]: 2026-01-31 07:03:28.690228249 +0000 UTC m=+0.181101219 container remove 9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_galois, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:03:28 np0005603541 systemd[1]: libpod-conmon-9a80e536cc61f5db56a7f69db4a6a50180dcf6bac889681ab80fa7a5ca951ae9.scope: Deactivated successfully.
Jan 31 02:03:28 np0005603541 python3.9[203350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:28 np0005603541 podman[203370]: 2026-01-31 07:03:28.845828251 +0000 UTC m=+0.057314159 container create c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:03:28 np0005603541 systemd[1]: Started libpod-conmon-c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3.scope.
Jan 31 02:03:28 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:28 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:28 np0005603541 podman[203370]: 2026-01-31 07:03:28.91502385 +0000 UTC m=+0.126509738 container init c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:03:28 np0005603541 podman[203370]: 2026-01-31 07:03:28.823661016 +0000 UTC m=+0.035146964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:28 np0005603541 podman[203370]: 2026-01-31 07:03:28.923092168 +0000 UTC m=+0.134578036 container start c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:03:28 np0005603541 podman[203370]: 2026-01-31 07:03:28.925768064 +0000 UTC m=+0.137253932 container attach c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:03:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:29 np0005603541 python3.9[203543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:29 np0005603541 brave_banzai[203410]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:03:29 np0005603541 brave_banzai[203410]: --> relative data size: 1.0
Jan 31 02:03:29 np0005603541 brave_banzai[203410]: --> All data devices are unavailable
Jan 31 02:03:29 np0005603541 systemd[1]: libpod-c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3.scope: Deactivated successfully.
Jan 31 02:03:29 np0005603541 podman[203370]: 2026-01-31 07:03:29.634388556 +0000 UTC m=+0.845874424 container died c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:03:29 np0005603541 systemd[1]: var-lib-containers-storage-overlay-01ee90a009395cd7489bb1b88cd72cacd6d0647a0871294d516cfc3e802809a4-merged.mount: Deactivated successfully.
Jan 31 02:03:29 np0005603541 podman[203370]: 2026-01-31 07:03:29.693531778 +0000 UTC m=+0.905017646 container remove c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banzai, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:03:29 np0005603541 systemd[1]: libpod-conmon-c4d3d0dd8a2924eb8f274d6d4a8423deef8d90ba595c95daf547707d38b9c8a3.scope: Deactivated successfully.
Jan 31 02:03:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:29.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:30 np0005603541 python3.9[203755]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.206105626 +0000 UTC m=+0.036858006 container create 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:03:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:30 np0005603541 systemd[1]: Started libpod-conmon-9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74.scope.
Jan 31 02:03:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:30 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.28326916 +0000 UTC m=+0.114021550 container init 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.188842062 +0000 UTC m=+0.019594452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.289569706 +0000 UTC m=+0.120322116 container start 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:03:30 np0005603541 vibrant_archimedes[203954]: 167 167
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.294613839 +0000 UTC m=+0.125366299 container attach 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:03:30 np0005603541 systemd[1]: libpod-9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74.scope: Deactivated successfully.
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.295686016 +0000 UTC m=+0.126438426 container died 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:03:30 np0005603541 systemd[1]: var-lib-containers-storage-overlay-f24ecc0556d4518aec6045b69daae0f57009f68ce7bfaee3e4d304dbd6b90821-merged.mount: Deactivated successfully.
Jan 31 02:03:30 np0005603541 podman[203899]: 2026-01-31 07:03:30.328413989 +0000 UTC m=+0.159166359 container remove 9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:03:30 np0005603541 systemd[1]: libpod-conmon-9730fa56e985610c4dcb2005adbe204140e7d1dcd8c31761c4a9324261404b74.scope: Deactivated successfully.
Jan 31 02:03:30 np0005603541 podman[204030]: 2026-01-31 07:03:30.438454472 +0000 UTC m=+0.034172920 container create ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:03:30 np0005603541 systemd[1]: Started libpod-conmon-ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6.scope.
Jan 31 02:03:30 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60364d3a14134a4ca5e2c381218264d8b797dbfa27f2ef8fa50c0e831fb28cc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60364d3a14134a4ca5e2c381218264d8b797dbfa27f2ef8fa50c0e831fb28cc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60364d3a14134a4ca5e2c381218264d8b797dbfa27f2ef8fa50c0e831fb28cc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:30 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60364d3a14134a4ca5e2c381218264d8b797dbfa27f2ef8fa50c0e831fb28cc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:30 np0005603541 podman[204030]: 2026-01-31 07:03:30.422306516 +0000 UTC m=+0.018024954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:30 np0005603541 podman[204030]: 2026-01-31 07:03:30.520199429 +0000 UTC m=+0.115917837 container init ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:03:30 np0005603541 podman[204030]: 2026-01-31 07:03:30.52551788 +0000 UTC m=+0.121236288 container start ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:03:30 np0005603541 podman[204030]: 2026-01-31 07:03:30.528940704 +0000 UTC m=+0.124659112 container attach ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:03:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:30 np0005603541 python3.9[204068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:31 np0005603541 python3.9[204228]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]: {
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:    "0": [
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:        {
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "devices": [
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "/dev/loop3"
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            ],
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "lv_name": "ceph_lv0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "lv_size": "7511998464",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "name": "ceph_lv0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "tags": {
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.cluster_name": "ceph",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.crush_device_class": "",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.encrypted": "0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.osd_id": "0",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.type": "block",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:                "ceph.vdo": "0"
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            },
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "type": "block",
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:            "vg_name": "ceph_vg0"
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:        }
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]:    ]
Jan 31 02:03:31 np0005603541 hungry_engelbart[204072]: }
Jan 31 02:03:31 np0005603541 systemd[1]: libpod-ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6.scope: Deactivated successfully.
Jan 31 02:03:31 np0005603541 podman[204030]: 2026-01-31 07:03:31.342167845 +0000 UTC m=+0.937886253 container died ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:03:31 np0005603541 systemd[1]: var-lib-containers-storage-overlay-60364d3a14134a4ca5e2c381218264d8b797dbfa27f2ef8fa50c0e831fb28cc4-merged.mount: Deactivated successfully.
Jan 31 02:03:31 np0005603541 podman[204030]: 2026-01-31 07:03:31.393494875 +0000 UTC m=+0.989213283 container remove ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:03:31 np0005603541 systemd[1]: libpod-conmon-ab83997d08e22d8452815fa3ae98edf0b8759954af481e85ea8aaea21ac502e6.scope: Deactivated successfully.
Jan 31 02:03:31 np0005603541 python3.9[204470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.915577837 +0000 UTC m=+0.066745151 container create d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:03:31 np0005603541 systemd[1]: Started libpod-conmon-d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26.scope.
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.867985508 +0000 UTC m=+0.019152862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:31 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.978779069 +0000 UTC m=+0.129946473 container init d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.984632422 +0000 UTC m=+0.135799716 container start d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:03:31 np0005603541 busy_nash[204639]: 167 167
Jan 31 02:03:31 np0005603541 systemd[1]: libpod-d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26.scope: Deactivated successfully.
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.992754092 +0000 UTC m=+0.143921436 container attach d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:03:31 np0005603541 podman[204564]: 2026-01-31 07:03:31.993433929 +0000 UTC m=+0.144601233 container died d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:03:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-0a8aa01faac65c4cc84f946637ffa3a1557ffd9b4d398bcac743e953096c2011-merged.mount: Deactivated successfully.
Jan 31 02:03:32 np0005603541 podman[204564]: 2026-01-31 07:03:32.027918105 +0000 UTC m=+0.179085409 container remove d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:03:32 np0005603541 systemd[1]: libpod-conmon-d23b849345959a8fa3bc840227e7cea252f95e5345c1312558defbeacf1a9e26.scope: Deactivated successfully.
Jan 31 02:03:32 np0005603541 podman[204730]: 2026-01-31 07:03:32.145251917 +0000 UTC m=+0.019991962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:03:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:32.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:32 np0005603541 podman[204730]: 2026-01-31 07:03:32.298390597 +0000 UTC m=+0.173130642 container create 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:03:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:32 np0005603541 python3.9[204727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:32 np0005603541 systemd[1]: Started libpod-conmon-874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2.scope.
Jan 31 02:03:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:03:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb1632547ea9ca5ef5092aee50bf6e0ee34feca142ce8a9e4233f5c4d612eebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb1632547ea9ca5ef5092aee50bf6e0ee34feca142ce8a9e4233f5c4d612eebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb1632547ea9ca5ef5092aee50bf6e0ee34feca142ce8a9e4233f5c4d612eebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb1632547ea9ca5ef5092aee50bf6e0ee34feca142ce8a9e4233f5c4d612eebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:03:32 np0005603541 podman[204730]: 2026-01-31 07:03:32.410916571 +0000 UTC m=+0.285656616 container init 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:03:32 np0005603541 podman[204730]: 2026-01-31 07:03:32.424219828 +0000 UTC m=+0.298959893 container start 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:03:32 np0005603541 podman[204730]: 2026-01-31 07:03:32.428042162 +0000 UTC m=+0.302782187 container attach 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:03:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:32 np0005603541 python3.9[204902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:33 np0005603541 trusting_morse[204750]: {
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:        "osd_id": 0,
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:        "type": "bluestore"
Jan 31 02:03:33 np0005603541 trusting_morse[204750]:    }
Jan 31 02:03:33 np0005603541 trusting_morse[204750]: }
Jan 31 02:03:33 np0005603541 systemd[1]: libpod-874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2.scope: Deactivated successfully.
Jan 31 02:03:33 np0005603541 podman[204730]: 2026-01-31 07:03:33.241803676 +0000 UTC m=+1.116543701 container died 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:03:33 np0005603541 systemd[1]: var-lib-containers-storage-overlay-bb1632547ea9ca5ef5092aee50bf6e0ee34feca142ce8a9e4233f5c4d612eebe-merged.mount: Deactivated successfully.
Jan 31 02:03:33 np0005603541 podman[204730]: 2026-01-31 07:03:33.293956376 +0000 UTC m=+1.168696401 container remove 874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:03:33 np0005603541 systemd[1]: libpod-conmon-874ad51fb4c3b41f8209ad13de7537168d4ff8fe45aa1ce10dad7643dbb0c7f2.scope: Deactivated successfully.
Jan 31 02:03:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:03:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:33 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:33 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:03:33 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:33 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 9af93308-cbe0-428e-85ce-85fbbbb07a4a does not exist
Jan 31 02:03:33 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 4ee524de-26b4-4041-a256-a7b2599efa5a does not exist
Jan 31 02:03:33 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 275e2a02-8218-4cad-8661-2d437042b707 does not exist
Jan 31 02:03:33 np0005603541 python3.9[205077]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:34 np0005603541 python3.9[205283]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:34 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:34 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:03:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:34 np0005603541 python3.9[205435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:35 np0005603541 python3.9[205558]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843014.2076905-2308-200755550880813/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:35 np0005603541 python3.9[205761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:36 np0005603541 python3.9[205884]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843015.423939-2308-183929821126/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:36 np0005603541 python3.9[206036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:37 np0005603541 python3.9[206159]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843016.564463-2308-185750570784767/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:38 np0005603541 python3.9[206312]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:38 np0005603541 python3.9[206435]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843017.6427803-2308-272893825748934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:39 np0005603541 python3.9[206587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:39 np0005603541 python3.9[206710]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843018.816588-2308-277595391922325/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:40 np0005603541 python3.9[206863]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:41 np0005603541 python3.9[206986]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843019.9704723-2308-39673466716852/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:41 np0005603541 python3.9[207138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:42 np0005603541 python3.9[207262]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843021.1475396-2308-63719262269228/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:42 np0005603541 python3.9[207414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:43 np0005603541 python3.9[207537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843022.2774084-2308-50875123820345/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:43.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:43 np0005603541 python3.9[207689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:44.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:44 np0005603541 python3.9[207813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843023.385444-2308-185104451947105/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:44 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:44 np0005603541 python3.9[207965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:45 np0005603541 python3.9[208088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843024.441571-2308-154203507363929/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:45 np0005603541 python3.9[208241]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:46.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:46 np0005603541 python3.9[208364]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843025.5463748-2308-190033317342859/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:46 np0005603541 python3.9[208516]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:47 np0005603541 python3.9[208639]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843026.5589495-2308-259665891837300/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:48 np0005603541 python3.9[208792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:03:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:48.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:03:48 np0005603541 python3.9[208915]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843027.5942614-2308-111198746227269/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:03:49
Jan 31 02:03:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:03:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:03:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr']
Jan 31 02:03:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:03:49 np0005603541 python3.9[209067]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:03:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:49 np0005603541 python3.9[209190]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843028.6889539-2308-45169441209726/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:50 np0005603541 python3.9[209341]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:50.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:51 np0005603541 python3.9[209496]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 02:03:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:51 np0005603541 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 02:03:52 np0005603541 podman[209502]: 2026-01-31 07:03:52.050343274 +0000 UTC m=+0.066095475 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 31 02:03:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:52.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:54 np0005603541 podman[209554]: 2026-01-31 07:03:54.038455429 +0000 UTC m=+0.080837375 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:03:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:54.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:54 np0005603541 python3.9[209699]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:03:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:03:54 np0005603541 python3.9[209851]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:55 np0005603541 python3.9[210053]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:55.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:56 np0005603541 python3.9[210206]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:56 np0005603541 python3.9[210358]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:57 np0005603541 python3.9[210510]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:03:58 np0005603541 python3.9[210663]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:03:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:58.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:03:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:03:58 np0005603541 python3.9[210815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:59 np0005603541 python3.9[210967]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:03:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:03:59 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:03:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:03:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:03:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:00 np0005603541 python3.9[211120]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:01 np0005603541 python3.9[211272]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:04:01 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:01 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:01 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:01 np0005603541 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 02:04:01 np0005603541 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 02:04:01 np0005603541 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 02:04:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:01 np0005603541 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 02:04:01 np0005603541 systemd[1]: Starting libvirt logging daemon...
Jan 31 02:04:01 np0005603541 systemd[1]: Started libvirt logging daemon.
Jan 31 02:04:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:02 np0005603541 python3.9[211466]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:04:02 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:02 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:02 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:02 np0005603541 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 02:04:02 np0005603541 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 02:04:02 np0005603541 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 02:04:02 np0005603541 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 02:04:02 np0005603541 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 02:04:02 np0005603541 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 02:04:02 np0005603541 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 02:04:02 np0005603541 systemd[1]: Started libvirt nodedev daemon.
Jan 31 02:04:03 np0005603541 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 02:04:03 np0005603541 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 02:04:03 np0005603541 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 02:04:03 np0005603541 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 02:04:03 np0005603541 python3.9[211684]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:04:03 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:03 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:03 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:03 np0005603541 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 02:04:03 np0005603541 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 02:04:03 np0005603541 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 02:04:03 np0005603541 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 02:04:03 np0005603541 systemd[1]: Starting libvirt proxy daemon...
Jan 31 02:04:04 np0005603541 systemd[1]: Started libvirt proxy daemon.
Jan 31 02:04:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:04 np0005603541 setroubleshoot[211556]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c13e8b64-da03-4a45-9968-d61da3aa4f75
Jan 31 02:04:04 np0005603541 setroubleshoot[211556]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 31 02:04:04 np0005603541 setroubleshoot[211556]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c13e8b64-da03-4a45-9968-d61da3aa4f75
Jan 31 02:04:04 np0005603541 setroubleshoot[211556]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 31 02:04:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:04 np0005603541 python3.9[211906]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:04:04 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:04 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:04 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:05 np0005603541 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 02:04:05 np0005603541 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 02:04:05 np0005603541 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 02:04:05 np0005603541 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 02:04:05 np0005603541 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 02:04:05 np0005603541 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 02:04:05 np0005603541 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 02:04:05 np0005603541 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 02:04:05 np0005603541 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 02:04:05 np0005603541 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 02:04:05 np0005603541 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:04:05 np0005603541 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:04:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:05.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:06 np0005603541 python3.9[212121]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:04:06 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:06 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:06 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:06 np0005603541 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 02:04:06 np0005603541 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 02:04:06 np0005603541 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 02:04:06 np0005603541 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 02:04:06 np0005603541 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 02:04:06 np0005603541 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 02:04:06 np0005603541 systemd[1]: Starting libvirt secret daemon...
Jan 31 02:04:06 np0005603541 systemd[1]: Started libvirt secret daemon.
Jan 31 02:04:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:07 np0005603541 python3.9[212333]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:07 np0005603541 python3.9[212486]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:04:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:08 np0005603541 python3.9[212638]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:09 np0005603541 python3.9[212792]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:04:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:04:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:10 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:10 np0005603541 python3.9[212943]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:10 np0005603541 python3.9[213064]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843049.796746-3382-9342902526759/.source.xml follow=False _original_basename=secret.xml.j2 checksum=17d5318e54ac3e2c57aea873011e00a806d508d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:11 np0005603541 python3.9[213216]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine ef73c6e0-6d85-55c2-9347-1f544d3e3d3a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:11.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:12 np0005603541 python3.9[213379]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:14 np0005603541 python3.9[213843]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:14 np0005603541 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 02:04:14 np0005603541 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.009s CPU time.
Jan 31 02:04:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:14 np0005603541 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 02:04:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:15 np0005603541 python3.9[213995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:15 np0005603541 python3.9[214168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843054.5896087-3547-169776490749201/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:16.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:16 np0005603541 python3.9[214321]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:17 np0005603541 python3.9[214473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:17 np0005603541 python3.9[214551]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:18 np0005603541 python3.9[214704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:18 np0005603541 python3.9[214782]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k10rz9lm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:19 np0005603541 python3.9[214934]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:19 np0005603541 python3.9[215012]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:04:20.131 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:04:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:04:20.132 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:04:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:04:20.132 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:04:20 np0005603541 python3.9[215165]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:21 np0005603541 python3[215318]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:04:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:22 np0005603541 podman[215443]: 2026-01-31 07:04:22.210619667 +0000 UTC m=+0.064671009 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:04:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:22.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:22 np0005603541 python3.9[215486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:22 np0005603541 python3.9[215568]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:23 np0005603541 python3.9[215720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:23.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:23 np0005603541 python3.9[215846]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843062.9787483-3814-80946994792789/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:24 np0005603541 podman[215970]: 2026-01-31 07:04:24.596012817 +0000 UTC m=+0.097361481 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:04:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:24 np0005603541 python3.9[216018]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:25 np0005603541 python3.9[216103]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:25 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:25 np0005603541 python3.9[216256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:26 np0005603541 python3.9[216334]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:27 np0005603541 python3.9[216486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:27 np0005603541 python3.9[216611]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843066.5651743-3931-23866967390934/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:28 np0005603541 python3.9[216764]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:29 np0005603541 python3.9[216916]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:29.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:29 np0005603541 python3.9[217072]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:30 np0005603541 python3.9[217224]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:31 np0005603541 python3.9[217377]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:04:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:31 np0005603541 python3.9[217532]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:04:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:32 np0005603541 python3.9[217687]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:33 np0005603541 python3.9[217839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:33.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:33 np0005603541 python3.9[217966]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843072.8828773-4147-224055022446075/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:34 np0005603541 podman[218158]: 2026-01-31 07:04:34.286834823 +0000 UTC m=+0.066664887 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:04:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:34 np0005603541 podman[218158]: 2026-01-31 07:04:34.404086113 +0000 UTC m=+0.183916177 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:04:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:34 np0005603541 podman[218412]: 2026-01-31 07:04:34.902897403 +0000 UTC m=+0.048219345 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 02:04:34 np0005603541 podman[218412]: 2026-01-31 07:04:34.938017436 +0000 UTC m=+0.083339368 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 02:04:35 np0005603541 python3.9[218451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:35 np0005603541 podman[218504]: 2026-01-31 07:04:35.123024129 +0000 UTC m=+0.043731466 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, com.redhat.component=keepalived-container, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Jan 31 02:04:35 np0005603541 podman[218504]: 2026-01-31 07:04:35.134887 +0000 UTC m=+0.055594317 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, version=2.2.4, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:35 np0005603541 python3.9[218784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843074.6561418-4192-22197041908537/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:36 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 77cbe5e6-5c8d-45ad-907d-25e1fd65a675 does not exist
Jan 31 02:04:36 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev bd2d03b2-9289-403d-81fc-28658f587ab2 does not exist
Jan 31 02:04:36 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev f1ca87da-089e-4294-b443-9bc2f0706c4e does not exist
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:04:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:36.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:36 np0005603541 python3.9[219072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:04:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.607963646 +0000 UTC m=+0.053651039 container create 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:04:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:36 np0005603541 systemd[1]: Started libpod-conmon-5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88.scope.
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.583925816 +0000 UTC m=+0.029613289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:36 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.795986934 +0000 UTC m=+0.241674357 container init 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.803395435 +0000 UTC m=+0.249082828 container start 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:04:36 np0005603541 busy_elion[219226]: 167 167
Jan 31 02:04:36 np0005603541 systemd[1]: libpod-5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88.scope: Deactivated successfully.
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.964475541 +0000 UTC m=+0.410162934 container attach 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:04:36 np0005603541 podman[219165]: 2026-01-31 07:04:36.965516527 +0000 UTC m=+0.411203930 container died 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:04:37 np0005603541 python3.9[219276]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843075.9662867-4237-61639412873445/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:04:37 np0005603541 systemd[1]: var-lib-containers-storage-overlay-90a2a887dd6d55af34cecf61c8686d1667c270e044f6a37b372ba58a91fbebe5-merged.mount: Deactivated successfully.
Jan 31 02:04:37 np0005603541 podman[219165]: 2026-01-31 07:04:37.124640204 +0000 UTC m=+0.570327597 container remove 5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:04:37 np0005603541 systemd[1]: libpod-conmon-5e62e83b4e6a7534ac700eaab6b34742e8c03deab1a3b7c546338f51ccf68c88.scope: Deactivated successfully.
Jan 31 02:04:37 np0005603541 podman[219326]: 2026-01-31 07:04:37.229839077 +0000 UTC m=+0.025033945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:37 np0005603541 podman[219326]: 2026-01-31 07:04:37.38958776 +0000 UTC m=+0.184782588 container create 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:04:37 np0005603541 systemd[1]: Started libpod-conmon-7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1.scope.
Jan 31 02:04:37 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:37 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:37 np0005603541 podman[219326]: 2026-01-31 07:04:37.534365576 +0000 UTC m=+0.329560424 container init 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:04:37 np0005603541 podman[219326]: 2026-01-31 07:04:37.544529366 +0000 UTC m=+0.339724194 container start 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:04:37 np0005603541 podman[219326]: 2026-01-31 07:04:37.559289739 +0000 UTC m=+0.354484657 container attach 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:04:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:37 np0005603541 python3.9[219463]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:04:37 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:37.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:37 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:37 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:38 np0005603541 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 02:04:38 np0005603541 loving_panini[219466]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:04:38 np0005603541 loving_panini[219466]: --> relative data size: 1.0
Jan 31 02:04:38 np0005603541 loving_panini[219466]: --> All data devices are unavailable
Jan 31 02:04:38 np0005603541 systemd[1]: libpod-7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1.scope: Deactivated successfully.
Jan 31 02:04:38 np0005603541 podman[219326]: 2026-01-31 07:04:38.354128778 +0000 UTC m=+1.149323596 container died 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:04:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:38 np0005603541 systemd[1]: var-lib-containers-storage-overlay-87a6c11059336bac469bab488f60d9f0b854f55b9f67bfd7d99e97db7f8d365a-merged.mount: Deactivated successfully.
Jan 31 02:04:38 np0005603541 podman[219326]: 2026-01-31 07:04:38.419036932 +0000 UTC m=+1.214231790 container remove 7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:04:38 np0005603541 systemd[1]: libpod-conmon-7af131623d6a9181389e5229fad33074d37112601763af3d6e766c3adabb80c1.scope: Deactivated successfully.
Jan 31 02:04:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:38 np0005603541 python3.9[219735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:04:38 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:39 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:39 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.03303665 +0000 UTC m=+0.035665836 container create 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.018065823 +0000 UTC m=+0.020695029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:39 np0005603541 systemd[1]: Started libpod-conmon-784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b.scope.
Jan 31 02:04:39 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.243423578 +0000 UTC m=+0.246052784 container init 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:04:39 np0005603541 systemd[1]: Reloading.
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.250717587 +0000 UTC m=+0.253346773 container start 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.254753726 +0000 UTC m=+0.257382942 container attach 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:04:39 np0005603541 modest_swirles[219879]: 167 167
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.257693508 +0000 UTC m=+0.260322694 container died 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:04:39 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:39 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:39 np0005603541 systemd[1]: libpod-784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b.scope: Deactivated successfully.
Jan 31 02:04:39 np0005603541 systemd[1]: var-lib-containers-storage-overlay-86b156968f0c72b74c36032e811f651ed771525a7b82f9545f6793c6533525ef-merged.mount: Deactivated successfully.
Jan 31 02:04:39 np0005603541 podman[219854]: 2026-01-31 07:04:39.561233392 +0000 UTC m=+0.563862588 container remove 784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:04:39 np0005603541 systemd[1]: libpod-conmon-784f30966121b11604939603feb32a12e9e5ea8db891f99e25721072740b199b.scope: Deactivated successfully.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.601583) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079601631, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 251, "total_data_size": 2064586, "memory_usage": 2106168, "flush_reason": "Manual Compaction"}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079615733, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2020588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15819, "largest_seqno": 17338, "table_properties": {"data_size": 2014165, "index_size": 3306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15531, "raw_average_key_size": 19, "raw_value_size": 2000004, "raw_average_value_size": 2509, "num_data_blocks": 146, "num_entries": 797, "num_filter_entries": 797, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842969, "oldest_key_time": 1769842969, "file_creation_time": 1769843079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14188 microseconds, and 3495 cpu microseconds.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.615784) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2020588 bytes OK
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.615801) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.619709) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.619723) EVENT_LOG_v1 {"time_micros": 1769843079619719, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.619737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2057876, prev total WAL file size 2073595, number of live WAL files 2.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.622475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1973KB)], [32(9492KB)]
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079622530, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 11740755, "oldest_snapshot_seqno": -1}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5376 keys, 11191971 bytes, temperature: kUnknown
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079695419, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 11191971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11154074, "index_size": 23387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 136253, "raw_average_key_size": 25, "raw_value_size": 11054571, "raw_average_value_size": 2056, "num_data_blocks": 958, "num_entries": 5376, "num_filter_entries": 5376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.695881) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 11191971 bytes
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.697179) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.6 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(11.3) write-amplify(5.5) OK, records in: 5893, records dropped: 517 output_compression: NoCompression
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.697213) EVENT_LOG_v1 {"time_micros": 1769843079697198, "job": 14, "event": "compaction_finished", "compaction_time_micros": 73097, "compaction_time_cpu_micros": 22437, "output_level": 6, "num_output_files": 1, "total_output_size": 11191971, "num_input_records": 5893, "num_output_records": 5376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079697681, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079698877, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.622396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.698939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.698946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.698950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.698953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:39.698956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:39 np0005603541 podman[219965]: 2026-01-31 07:04:39.715222103 +0000 UTC m=+0.050395708 container create a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:04:39 np0005603541 systemd[1]: Started libpod-conmon-a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f.scope.
Jan 31 02:04:39 np0005603541 podman[219965]: 2026-01-31 07:04:39.688968119 +0000 UTC m=+0.024141774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:39 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a436e4ee7524c9a078d43c3b7e242df121a08642edf8541ab95328813445264/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a436e4ee7524c9a078d43c3b7e242df121a08642edf8541ab95328813445264/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a436e4ee7524c9a078d43c3b7e242df121a08642edf8541ab95328813445264/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:39 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a436e4ee7524c9a078d43c3b7e242df121a08642edf8541ab95328813445264/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:39 np0005603541 podman[219965]: 2026-01-31 07:04:39.823681937 +0000 UTC m=+0.158855592 container init a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:39 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:39 np0005603541 podman[219965]: 2026-01-31 07:04:39.839298341 +0000 UTC m=+0.174471966 container start a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:04:39 np0005603541 podman[219965]: 2026-01-31 07:04:39.843652667 +0000 UTC m=+0.178826322 container attach a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:04:40 np0005603541 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 02:04:40 np0005603541 systemd[1]: session-49.scope: Consumed 2min 59.352s CPU time.
Jan 31 02:04:40 np0005603541 systemd-logind[817]: Session 49 logged out. Waiting for processes to exit.
Jan 31 02:04:40 np0005603541 systemd-logind[817]: Removed session 49.
Jan 31 02:04:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]: {
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:    "0": [
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:        {
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "devices": [
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "/dev/loop3"
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            ],
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "lv_name": "ceph_lv0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "lv_size": "7511998464",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "name": "ceph_lv0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "tags": {
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.cluster_name": "ceph",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.crush_device_class": "",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.encrypted": "0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.osd_id": "0",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.type": "block",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:                "ceph.vdo": "0"
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            },
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "type": "block",
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:            "vg_name": "ceph_vg0"
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:        }
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]:    ]
Jan 31 02:04:40 np0005603541 flamboyant_diffie[219981]: }
Jan 31 02:04:40 np0005603541 systemd[1]: libpod-a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f.scope: Deactivated successfully.
Jan 31 02:04:40 np0005603541 podman[219965]: 2026-01-31 07:04:40.591886353 +0000 UTC m=+0.927060028 container died a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:04:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:40 np0005603541 systemd[1]: var-lib-containers-storage-overlay-6a436e4ee7524c9a078d43c3b7e242df121a08642edf8541ab95328813445264-merged.mount: Deactivated successfully.
Jan 31 02:04:40 np0005603541 podman[219965]: 2026-01-31 07:04:40.66307493 +0000 UTC m=+0.998248535 container remove a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:04:40 np0005603541 systemd[1]: libpod-conmon-a070e63a4ca218a41755fa4beb9d932c03b3f84073775d64272e5faacdd6459f.scope: Deactivated successfully.
Jan 31 02:04:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.233755636 +0000 UTC m=+0.040505146 container create 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:04:41 np0005603541 systemd[1]: Started libpod-conmon-2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d.scope.
Jan 31 02:04:41 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.307904666 +0000 UTC m=+0.114654186 container init 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.214328998 +0000 UTC m=+0.021078488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.314222771 +0000 UTC m=+0.120972251 container start 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.318569939 +0000 UTC m=+0.125319459 container attach 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:04:41 np0005603541 fervent_leavitt[220161]: 167 167
Jan 31 02:04:41 np0005603541 systemd[1]: libpod-2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d.scope: Deactivated successfully.
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.321040929 +0000 UTC m=+0.127790409 container died 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:04:41 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a62e5ffcc3ee20c7b6335c00156fc10fe086a51f72f7b672b7917eb223c2a234-merged.mount: Deactivated successfully.
Jan 31 02:04:41 np0005603541 podman[220145]: 2026-01-31 07:04:41.36382166 +0000 UTC m=+0.170571120 container remove 2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:04:41 np0005603541 systemd[1]: libpod-conmon-2c440febfb78e07ed4ffbbacec500729cbf7ad6427261ec44a40d1d42e45ae6d.scope: Deactivated successfully.
Jan 31 02:04:41 np0005603541 podman[220183]: 2026-01-31 07:04:41.513486755 +0000 UTC m=+0.056896078 container create 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:04:41 np0005603541 systemd[1]: Started libpod-conmon-4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05.scope.
Jan 31 02:04:41 np0005603541 podman[220183]: 2026-01-31 07:04:41.4872357 +0000 UTC m=+0.030645083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:04:41 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:04:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac22e7cdba48be92726549763ca20470457b64729fefddf1ead3d5ba2a1534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac22e7cdba48be92726549763ca20470457b64729fefddf1ead3d5ba2a1534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac22e7cdba48be92726549763ca20470457b64729fefddf1ead3d5ba2a1534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:41 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac22e7cdba48be92726549763ca20470457b64729fefddf1ead3d5ba2a1534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:04:41 np0005603541 podman[220183]: 2026-01-31 07:04:41.614917866 +0000 UTC m=+0.158327249 container init 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:04:41 np0005603541 podman[220183]: 2026-01-31 07:04:41.622847191 +0000 UTC m=+0.166256524 container start 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:04:41 np0005603541 podman[220183]: 2026-01-31 07:04:41.627258439 +0000 UTC m=+0.170667822 container attach 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:04:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:42.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]: {
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:        "osd_id": 0,
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:        "type": "bluestore"
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]:    }
Jan 31 02:04:42 np0005603541 sharp_bohr[220199]: }
Jan 31 02:04:42 np0005603541 systemd[1]: libpod-4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05.scope: Deactivated successfully.
Jan 31 02:04:42 np0005603541 podman[220183]: 2026-01-31 07:04:42.488023578 +0000 UTC m=+1.031432871 container died 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:04:42 np0005603541 systemd[1]: var-lib-containers-storage-overlay-89ac22e7cdba48be92726549763ca20470457b64729fefddf1ead3d5ba2a1534-merged.mount: Deactivated successfully.
Jan 31 02:04:42 np0005603541 podman[220183]: 2026-01-31 07:04:42.533865934 +0000 UTC m=+1.077275227 container remove 4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:04:42 np0005603541 systemd[1]: libpod-conmon-4fb7e6a478e5e4b7dc5c5b7f9f0a478fee4e23f488cd6346cdd2dbb5aa75fd05.scope: Deactivated successfully.
Jan 31 02:04:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:04:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:04:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:42 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c1e66000-efbb-44a2-b011-18fa263bb739 does not exist
Jan 31 02:04:42 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 172f0e1e-0c3d-4b46-a6dd-b2abe94c42f8 does not exist
Jan 31 02:04:42 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 2ebe7cb9-bfd5-4ed8-96ff-ffea251493a5 does not exist
Jan 31 02:04:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:43 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:04:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:04:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:04:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:04:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:04:45 np0005603541 systemd-logind[817]: New session 50 of user zuul.
Jan 31 02:04:45 np0005603541 systemd[1]: Started Session 50 of User zuul.
Jan 31 02:04:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:46.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:47 np0005603541 python3.9[220438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:04:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:48 np0005603541 python3.9[220593]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:04:48 np0005603541 network[220610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:04:48 np0005603541 network[220611]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:04:48 np0005603541 network[220612]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:04:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:04:49
Jan 31 02:04:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:04:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:04:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', '.rgw.root']
Jan 31 02:04:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:04:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.991918) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843090991964, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 406, "num_deletes": 251, "total_data_size": 301839, "memory_usage": 310696, "flush_reason": "Manual Compaction"}
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843090995812, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 298528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17339, "largest_seqno": 17744, "table_properties": {"data_size": 296117, "index_size": 511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6229, "raw_average_key_size": 19, "raw_value_size": 291190, "raw_average_value_size": 895, "num_data_blocks": 22, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843079, "oldest_key_time": 1769843079, "file_creation_time": 1769843090, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3968 microseconds, and 1902 cpu microseconds.
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.995881) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 298528 bytes OK
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.995910) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.997396) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.997423) EVENT_LOG_v1 {"time_micros": 1769843090997415, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.997447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 299250, prev total WAL file size 299250, number of live WAL files 2.
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.998058) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(291KB)], [35(10MB)]
Jan 31 02:04:50 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843090998126, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 11490499, "oldest_snapshot_seqno": -1}
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 5188 keys, 9866809 bytes, temperature: kUnknown
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091066098, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9866809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9831097, "index_size": 21650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133095, "raw_average_key_size": 25, "raw_value_size": 9735611, "raw_average_value_size": 1876, "num_data_blocks": 882, "num_entries": 5188, "num_filter_entries": 5188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843090, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.066392) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9866809 bytes
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.067940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.8 rd, 145.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(71.5) write-amplify(33.1) OK, records in: 5701, records dropped: 513 output_compression: NoCompression
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.067967) EVENT_LOG_v1 {"time_micros": 1769843091067953, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68056, "compaction_time_cpu_micros": 27826, "output_level": 6, "num_output_files": 1, "total_output_size": 9866809, "num_input_records": 5701, "num_output_records": 5188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091068119, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091069056, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:50.997970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.069313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.069322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.069326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.069329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:04:51.069332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:04:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:53 np0005603541 podman[220759]: 2026-01-31 07:04:53.023754994 +0000 UTC m=+0.058355695 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:04:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:53 np0005603541 python3.9[220905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:04:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:04:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:53.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:04:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:54 np0005603541 python3.9[220990]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:04:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:55 np0005603541 podman[220992]: 2026-01-31 07:04:55.025306457 +0000 UTC m=+0.066688918 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:04:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:55.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:57.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:04:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:04:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:58.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:04:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:04:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:04:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:04:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:04:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:04:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:04:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:01 np0005603541 python3.9[221223]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:05:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:01.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:02.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:02 np0005603541 python3.9[221376]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:05:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:03 np0005603541 python3.9[221529]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:05:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:04 np0005603541 python3.9[221682]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:05:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:05:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:04.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:05:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:04 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:04 np0005603541 python3.9[221835]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:05:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:05 np0005603541 python3.9[221958]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843104.1641426-245-244648024704091/.source.iscsi _original_basename=.komqwumo follow=False checksum=91fe4a34d25c24a47ebdca56a8e1ae717124e8b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:06 np0005603541 python3.9[222111]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:06 np0005603541 python3.9[222263]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:08 np0005603541 python3.9[222416]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:08 np0005603541 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 02:05:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:08 np0005603541 python3.9[222572]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:09 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:09 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:09 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:09 np0005603541 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:05:09 np0005603541 systemd[1]: Starting Open-iSCSI...
Jan 31 02:05:09 np0005603541 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 02:05:09 np0005603541 systemd[1]: Started Open-iSCSI.
Jan 31 02:05:09 np0005603541 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 02:05:09 np0005603541 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 02:05:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:09 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:09.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:05:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:10 np0005603541 python3.9[222774]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:05:10 np0005603541 network[222791]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:05:10 np0005603541 network[222792]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:05:10 np0005603541 network[222793]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:05:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:11.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:13 np0005603541 python3.9[223066]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:05:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:13.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:14.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:15 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:15 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:05:15 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:05:15 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:16 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:16 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:16 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:05:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:16 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:05:16 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:05:16 np0005603541 systemd[1]: run-rd6ef8857843a4bdcad5d33cc047c49e3.service: Deactivated successfully.
Jan 31 02:05:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:17.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:18.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:18 np0005603541 python3.9[223434]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:05:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:19 np0005603541 python3.9[223586]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 02:05:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:05:20.132 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:05:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:05:20.133 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:05:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:05:20.134 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:05:20 np0005603541 python3.9[223743]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:05:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:20.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:20 np0005603541 python3.9[223866]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843119.611164-509-150952440171225/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:21 np0005603541 python3.9[224018]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:22.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:22 np0005603541 python3.9[224171]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:05:22 np0005603541 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:05:22 np0005603541 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:05:22 np0005603541 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:05:22 np0005603541 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:05:22 np0005603541 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:05:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:23 np0005603541 podman[224299]: 2026-01-31 07:05:23.212670647 +0000 UTC m=+0.062680554 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:05:23 np0005603541 python3.9[224339]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:05:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:23.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:24 np0005603541 python3.9[224500]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:05:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:25 np0005603541 python3.9[224652]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:05:25 np0005603541 podman[224747]: 2026-01-31 07:05:25.568375637 +0000 UTC m=+0.072122275 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 31 02:05:25 np0005603541 python3.9[224797]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843124.719088-662-160906667503667/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:25.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:26 np0005603541 python3.9[224954]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:05:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:27.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:27 np0005603541 python3.9[225108]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:28 np0005603541 python3.9[225260]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:29 np0005603541 python3.9[225412]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:30 np0005603541 python3.9[225565]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:30 np0005603541 python3.9[225717]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:31 np0005603541 python3.9[225869]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:31 np0005603541 python3.9[226022]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:32.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:32 np0005603541 python3.9[226174]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:05:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:33 np0005603541 python3.9[226328]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:05:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:34.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:34 np0005603541 python3.9[226482]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:34 np0005603541 systemd[1]: Listening on multipathd control socket.
Jan 31 02:05:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:35 np0005603541 python3.9[226638]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:35 np0005603541 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 02:05:35 np0005603541 udevadm[226644]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 02:05:35 np0005603541 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 02:05:35 np0005603541 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:05:35 np0005603541 multipathd[226672]: --------start up--------
Jan 31 02:05:35 np0005603541 multipathd[226672]: read /etc/multipath.conf
Jan 31 02:05:35 np0005603541 multipathd[226672]: path checkers start up
Jan 31 02:05:35 np0005603541 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:05:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:35.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:36.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:37 np0005603541 python3.9[226857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:05:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:37 np0005603541 python3.9[227009]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 02:05:37 np0005603541 kernel: Key type psk registered
Jan 31 02:05:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:38 np0005603541 python3.9[227173]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:05:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:38.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:38 np0005603541 python3.9[227296]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843137.9386024-1052-11321470281625/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:39 np0005603541 python3.9[227448]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:39.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:40 np0005603541 python3.9[227601]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:05:40 np0005603541 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:05:40 np0005603541 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:05:40 np0005603541 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:05:40 np0005603541 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:05:40 np0005603541 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:05:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:40.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:41 np0005603541 python3.9[227757]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:05:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:43 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:43 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:43 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:43 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:43.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:43 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:43 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:44 np0005603541 systemd-logind[817]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 02:05:44 np0005603541 lvm[227970]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:05:44 np0005603541 lvm[227970]: VG ceph_vg0 finished
Jan 31 02:05:44 np0005603541 systemd-logind[817]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 02:05:44 np0005603541 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:05:44 np0005603541 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:44 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:44 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:44 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 09045802-01a6-4858-b424-d60c99972f8f does not exist
Jan 31 02:05:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c5702f70-9401-41e9-8f12-c451ec0d4416 does not exist
Jan 31 02:05:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 2e5f32dc-9d99-4fc0-8bf7-64173cce1cdd does not exist
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:44 np0005603541 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:05:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:44.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.088197633 +0000 UTC m=+0.038172411 container create 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:05:45 np0005603541 systemd[1]: Started libpod-conmon-3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677.scope.
Jan 31 02:05:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.162569522 +0000 UTC m=+0.112544330 container init 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.07386929 +0000 UTC m=+0.023844068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.168812765 +0000 UTC m=+0.118787543 container start 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 02:05:45 np0005603541 recursing_faraday[229196]: 167 167
Jan 31 02:05:45 np0005603541 systemd[1]: libpod-3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677.scope: Deactivated successfully.
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.17263818 +0000 UTC m=+0.122612988 container attach 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.173090951 +0000 UTC m=+0.123065729 container died 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:05:45 np0005603541 systemd[1]: var-lib-containers-storage-overlay-07154a3350c724da37010f49bf237809d969f520cf6f6ea8f98c0522465b7047-merged.mount: Deactivated successfully.
Jan 31 02:05:45 np0005603541 podman[229029]: 2026-01-31 07:05:45.212282235 +0000 UTC m=+0.162257013 container remove 3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:05:45 np0005603541 systemd[1]: libpod-conmon-3880fdae89c2afd14232fc52ab8ae9dd40d4c2f8f75e44433b518d76c96a9677.scope: Deactivated successfully.
Jan 31 02:05:45 np0005603541 podman[229405]: 2026-01-31 07:05:45.332608085 +0000 UTC m=+0.034447147 container create d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:05:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:05:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:45 np0005603541 systemd[1]: Started libpod-conmon-d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2.scope.
Jan 31 02:05:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:45 np0005603541 podman[229405]: 2026-01-31 07:05:45.316747496 +0000 UTC m=+0.018586578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:45 np0005603541 podman[229405]: 2026-01-31 07:05:45.42789079 +0000 UTC m=+0.129729872 container init d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:05:45 np0005603541 podman[229405]: 2026-01-31 07:05:45.436919122 +0000 UTC m=+0.138758184 container start d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:05:45 np0005603541 podman[229405]: 2026-01-31 07:05:45.441353371 +0000 UTC m=+0.143192433 container attach d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:45 np0005603541 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:05:45 np0005603541 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:05:45 np0005603541 systemd[1]: man-db-cache-update.service: Consumed 1.093s CPU time.
Jan 31 02:05:45 np0005603541 systemd[1]: run-r637caaa4fff74716b3397b657559402c.service: Deactivated successfully.
Jan 31 02:05:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:45.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:46 np0005603541 frosty_chatterjee[229422]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:05:46 np0005603541 frosty_chatterjee[229422]: --> relative data size: 1.0
Jan 31 02:05:46 np0005603541 frosty_chatterjee[229422]: --> All data devices are unavailable
Jan 31 02:05:46 np0005603541 systemd[1]: libpod-d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2.scope: Deactivated successfully.
Jan 31 02:05:46 np0005603541 python3.9[229556]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:05:46 np0005603541 systemd[1]: Stopping Open-iSCSI...
Jan 31 02:05:46 np0005603541 iscsid[222613]: iscsid shutting down.
Jan 31 02:05:46 np0005603541 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 02:05:46 np0005603541 systemd[1]: Stopped Open-iSCSI.
Jan 31 02:05:46 np0005603541 podman[229567]: 2026-01-31 07:05:46.171220778 +0000 UTC m=+0.026529063 container died d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:05:46 np0005603541 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:05:46 np0005603541 systemd[1]: Starting Open-iSCSI...
Jan 31 02:05:46 np0005603541 systemd[1]: var-lib-containers-storage-overlay-eb4588ca8a81666e495032e56b0daa9e33c4de6815e4a5c58805a1581bb70e17-merged.mount: Deactivated successfully.
Jan 31 02:05:46 np0005603541 systemd[1]: Started Open-iSCSI.
Jan 31 02:05:46 np0005603541 podman[229567]: 2026-01-31 07:05:46.229937923 +0000 UTC m=+0.085246198 container remove d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:05:46 np0005603541 systemd[1]: libpod-conmon-d6a59e03e783f5bc2e0611bf03c6e7dcca2c5b19da345cf01453682a1fc6afa2.scope: Deactivated successfully.
Jan 31 02:05:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.780796377 +0000 UTC m=+0.051280833 container create d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:05:46 np0005603541 systemd[1]: Started libpod-conmon-d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437.scope.
Jan 31 02:05:46 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.761256716 +0000 UTC m=+0.031741202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.859588535 +0000 UTC m=+0.130073011 container init d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.865676394 +0000 UTC m=+0.136160860 container start d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:05:46 np0005603541 peaceful_swartz[229895]: 167 167
Jan 31 02:05:46 np0005603541 systemd[1]: libpod-d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437.scope: Deactivated successfully.
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.871759695 +0000 UTC m=+0.142244161 container attach d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.872380869 +0000 UTC m=+0.142865335 container died d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:05:46 np0005603541 systemd[1]: var-lib-containers-storage-overlay-73ee1df1ccbcb2fd7b6f122c0955628aa22b142633eb6a14a8e928551c852f73-merged.mount: Deactivated successfully.
Jan 31 02:05:46 np0005603541 podman[229879]: 2026-01-31 07:05:46.911075922 +0000 UTC m=+0.181560388 container remove d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 02:05:46 np0005603541 systemd[1]: libpod-conmon-d56bc821b33e5f5cbd1e6920f9660c5d3c29806c13ba2c964de5f8a373b63437.scope: Deactivated successfully.
Jan 31 02:05:46 np0005603541 python3.9[229871]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:05:46 np0005603541 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 02:05:46 np0005603541 multipathd[226672]: exit (signal)
Jan 31 02:05:47 np0005603541 multipathd[226672]: --------shut down-------
Jan 31 02:05:47 np0005603541 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 02:05:47 np0005603541 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 02:05:47 np0005603541 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.055875244 +0000 UTC m=+0.050036002 container create b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:05:47 np0005603541 multipathd[229938]: --------start up--------
Jan 31 02:05:47 np0005603541 multipathd[229938]: read /etc/multipath.conf
Jan 31 02:05:47 np0005603541 multipathd[229938]: path checkers start up
Jan 31 02:05:47 np0005603541 systemd[1]: Started libpod-conmon-b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313.scope.
Jan 31 02:05:47 np0005603541 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.031265559 +0000 UTC m=+0.025426307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:47 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85ef9bd5d9ee21a7c8813ea77183c219d03e2b8969671133d90af29332e0e33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85ef9bd5d9ee21a7c8813ea77183c219d03e2b8969671133d90af29332e0e33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85ef9bd5d9ee21a7c8813ea77183c219d03e2b8969671133d90af29332e0e33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85ef9bd5d9ee21a7c8813ea77183c219d03e2b8969671133d90af29332e0e33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.156026509 +0000 UTC m=+0.150187267 container init b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.164769043 +0000 UTC m=+0.158929801 container start b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.172150615 +0000 UTC m=+0.166311383 container attach b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:05:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:47 np0005603541 python3.9[230102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:05:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:47 np0005603541 blissful_kare[229948]: {
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:    "0": [
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:        {
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "devices": [
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "/dev/loop3"
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            ],
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "lv_name": "ceph_lv0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "lv_size": "7511998464",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "name": "ceph_lv0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "tags": {
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.cluster_name": "ceph",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.crush_device_class": "",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.encrypted": "0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.osd_id": "0",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.type": "block",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:                "ceph.vdo": "0"
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            },
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "type": "block",
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:            "vg_name": "ceph_vg0"
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:        }
Jan 31 02:05:47 np0005603541 blissful_kare[229948]:    ]
Jan 31 02:05:47 np0005603541 blissful_kare[229948]: }
Jan 31 02:05:47 np0005603541 systemd[1]: libpod-b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313.scope: Deactivated successfully.
Jan 31 02:05:47 np0005603541 podman[229921]: 2026-01-31 07:05:47.942191941 +0000 UTC m=+0.936352709 container died b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:05:47 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d85ef9bd5d9ee21a7c8813ea77183c219d03e2b8969671133d90af29332e0e33-merged.mount: Deactivated successfully.
Jan 31 02:05:48 np0005603541 podman[229921]: 2026-01-31 07:05:48.001135621 +0000 UTC m=+0.995296369 container remove b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:05:48 np0005603541 systemd[1]: libpod-conmon-b3c3eafe7f8e9f43676030739e14f401be57a6362046ae0ee3d1441ed1bd2313.scope: Deactivated successfully.
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.573151595 +0000 UTC m=+0.037440063 container create 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:05:48 np0005603541 systemd[1]: Started libpod-conmon-5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c.scope.
Jan 31 02:05:48 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.556611157 +0000 UTC m=+0.020899655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.653170963 +0000 UTC m=+0.117459451 container init 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:05:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.661200481 +0000 UTC m=+0.125488949 container start 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.665061955 +0000 UTC m=+0.129350443 container attach 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:05:48 np0005603541 elated_sammet[230420]: 167 167
Jan 31 02:05:48 np0005603541 systemd[1]: libpod-5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c.scope: Deactivated successfully.
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.668757966 +0000 UTC m=+0.133046434 container died 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:05:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay-31af2f87c42869454467cf08117978dcbff6afc2cec1f2e448d58b8cbc51b972-merged.mount: Deactivated successfully.
Jan 31 02:05:48 np0005603541 podman[230362]: 2026-01-31 07:05:48.718542252 +0000 UTC m=+0.182830720 container remove 5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:48 np0005603541 systemd[1]: libpod-conmon-5cf71583e93aec25f61f37215ca67816f4d935d170f92b69eb5bd8e01c90af7c.scope: Deactivated successfully.
Jan 31 02:05:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:48 np0005603541 podman[230456]: 2026-01-31 07:05:48.85264704 +0000 UTC m=+0.032157432 container create 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:05:48 np0005603541 python3.9[230434]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:05:48 np0005603541 systemd[1]: Started libpod-conmon-372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70.scope.
Jan 31 02:05:48 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:05:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c0a6f102b8528d386459284f49730d70b8bd65b5aa4d711b260f0d1660af11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c0a6f102b8528d386459284f49730d70b8bd65b5aa4d711b260f0d1660af11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c0a6f102b8528d386459284f49730d70b8bd65b5aa4d711b260f0d1660af11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:48 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c0a6f102b8528d386459284f49730d70b8bd65b5aa4d711b260f0d1660af11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:05:48 np0005603541 podman[230456]: 2026-01-31 07:05:48.929818389 +0000 UTC m=+0.109328791 container init 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:05:48 np0005603541 podman[230456]: 2026-01-31 07:05:48.839366654 +0000 UTC m=+0.018877076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:05:48 np0005603541 podman[230456]: 2026-01-31 07:05:48.937343915 +0000 UTC m=+0.116854307 container start 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:48 np0005603541 podman[230456]: 2026-01-31 07:05:48.940234006 +0000 UTC m=+0.119744428 container attach 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:05:49
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.625136) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149625215, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 910, "num_deletes": 251, "total_data_size": 1048153, "memory_usage": 1068472, "flush_reason": "Manual Compaction"}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149629921, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 674997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17745, "largest_seqno": 18654, "table_properties": {"data_size": 671406, "index_size": 1179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10657, "raw_average_key_size": 20, "raw_value_size": 663193, "raw_average_value_size": 1290, "num_data_blocks": 52, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843091, "oldest_key_time": 1769843091, "file_creation_time": 1769843149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4796 microseconds, and 2306 cpu microseconds.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.629968) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 674997 bytes OK
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.629989) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.631251) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.631271) EVENT_LOG_v1 {"time_micros": 1769843149631265, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.631289) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1043716, prev total WAL file size 1043716, number of live WAL files 2.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.631730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(659KB)], [38(9635KB)]
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149631769, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10541806, "oldest_snapshot_seqno": -1}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5212 keys, 7014686 bytes, temperature: kUnknown
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149676476, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7014686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6982904, "index_size": 17656, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 134331, "raw_average_key_size": 25, "raw_value_size": 6891041, "raw_average_value_size": 1322, "num_data_blocks": 706, "num_entries": 5212, "num_filter_entries": 5212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.676714) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7014686 bytes
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.678181) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.5 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.4 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(26.0) write-amplify(10.4) OK, records in: 5702, records dropped: 490 output_compression: NoCompression
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.678203) EVENT_LOG_v1 {"time_micros": 1769843149678193, "job": 18, "event": "compaction_finished", "compaction_time_micros": 44767, "compaction_time_cpu_micros": 19425, "output_level": 6, "num_output_files": 1, "total_output_size": 7014686, "num_input_records": 5702, "num_output_records": 5212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149678371, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149679752, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.631653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.679838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.679845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.679847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.679849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:05:49.679851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]: {
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:        "osd_id": 0,
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:        "type": "bluestore"
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]:    }
Jan 31 02:05:49 np0005603541 gallant_lehmann[230473]: }
Jan 31 02:05:49 np0005603541 systemd[1]: libpod-372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70.scope: Deactivated successfully.
Jan 31 02:05:49 np0005603541 conmon[230473]: conmon 372702fdd35ea44aae5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70.scope/container/memory.events
Jan 31 02:05:49 np0005603541 podman[230456]: 2026-01-31 07:05:49.758545249 +0000 UTC m=+0.938055641 container died 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:05:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-38c0a6f102b8528d386459284f49730d70b8bd65b5aa4d711b260f0d1660af11-merged.mount: Deactivated successfully.
Jan 31 02:05:49 np0005603541 podman[230456]: 2026-01-31 07:05:49.812521327 +0000 UTC m=+0.992031739 container remove 372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:05:49 np0005603541 systemd[1]: libpod-conmon-372702fdd35ea44aae5b4d3acbf84c777b6f3180ba53ae13308453d56983ab70.scope: Deactivated successfully.
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:05:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:49.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev effdf824-0911-4c7d-ae48-9445ac7f83ba does not exist
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d6680472-abb4-4744-abc8-cb72905180a9 does not exist
Jan 31 02:05:49 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 125210b6-974b-414b-9664-2e5918e7d716 does not exist
Jan 31 02:05:50 np0005603541 python3.9[230653]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:05:50 np0005603541 systemd[1]: Reloading.
Jan 31 02:05:50 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:05:50 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:05:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:50 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:50 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:05:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:05:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:05:50 np0005603541 python3.9[230895]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:05:51 np0005603541 network[230912]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:05:51 np0005603541 network[230913]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:05:51 np0005603541 network[230914]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:05:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:53 np0005603541 podman[231029]: 2026-01-31 07:05:53.304767138 +0000 UTC m=+0.047835108 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:05:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:53.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:05:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:05:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:55 np0005603541 python3.9[231208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:55 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:55.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:55 np0005603541 podman[231334]: 2026-01-31 07:05:55.887969363 +0000 UTC m=+0.114991860 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:05:56 np0005603541 python3.9[231381]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:56.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:56 np0005603541 python3.9[231592]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:57 np0005603541 python3.9[231745]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:05:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:05:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:05:58 np0005603541 python3.9[231899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:05:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:05:59 np0005603541 python3.9[232052]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:59 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:05:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:05:59 np0005603541 python3.9[232205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:05:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:05:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:05:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:00 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:00 np0005603541 python3.9[232359]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:06:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:00.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:01 np0005603541 python3.9[232512]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:02 np0005603541 python3.9[232665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:06:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:02.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:06:02 np0005603541 python3.9[232817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:02 np0005603541 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 02:06:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:03 np0005603541 python3.9[232970]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:03.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:03 np0005603541 python3.9[233123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:04 np0005603541 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 02:06:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:04 np0005603541 python3.9[233276]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:04 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:04 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:05 np0005603541 python3.9[233428]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:05 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:05 np0005603541 python3.9[233580]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:05.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:06 np0005603541 python3.9[233733]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:06.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:06 np0005603541 python3.9[233885]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:07 np0005603541 python3.9[234037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:06:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:06:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:08 np0005603541 python3.9[234190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:08 np0005603541 python3.9[234342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:09 np0005603541 python3.9[234494]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:09 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:09 np0005603541 python3.9[234647]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:06:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:10 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:10 np0005603541 python3.9[234799]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:11 np0005603541 python3.9[234951]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 02:06:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:11.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 02:06:12 np0005603541 python3.9[235104]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:06:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:06:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:06:13 np0005603541 python3.9[235256]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:06:13 np0005603541 systemd[1]: Reloading.
Jan 31 02:06:13 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:13 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:13 np0005603541 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 02:06:13 np0005603541 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 02:06:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:13.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:14 np0005603541 python3.9[235447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:14 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:14 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:14 np0005603541 python3.9[235600]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:15 np0005603541 python3.9[235753]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:15.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:16 np0005603541 python3.9[235907]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:16 np0005603541 python3.9[236110]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:17 np0005603541 python3.9[236263]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:17 np0005603541 python3.9[236416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:17.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:18 np0005603541 python3.9[236570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:19 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 919 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:19 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:19 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 919 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:19 np0005603541 python3.9[236724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:19.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:06:20.133 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:06:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:06:20.135 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:06:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:06:20.135 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:06:20 np0005603541 python3.9[236876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:20.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:21 np0005603541 python3.9[237028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:21 np0005603541 python3.9[237180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:21.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:22 np0005603541 python3.9[237333]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:22 np0005603541 python3.9[237485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:23 np0005603541 podman[237609]: 2026-01-31 07:06:23.405265944 +0000 UTC m=+0.068619069 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:06:23 np0005603541 python3.9[237654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:24 np0005603541 python3.9[237810]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:24 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:24 np0005603541 python3.9[237962]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:24 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:25 np0005603541 python3.9[238114]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:26 np0005603541 podman[238140]: 2026-01-31 07:06:26.034357399 +0000 UTC m=+0.074551045 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:06:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:28.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:29 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:30 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:31 np0005603541 python3.9[238295]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 02:06:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:31 np0005603541 python3.9[238449]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:06:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:32 np0005603541 python3.9[238607]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:06:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:32.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:34 np0005603541 systemd-logind[817]: New session 51 of user zuul.
Jan 31 02:06:34 np0005603541 systemd[1]: Started Session 51 of User zuul.
Jan 31 02:06:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:34 np0005603541 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 02:06:34 np0005603541 systemd-logind[817]: Session 51 logged out. Waiting for processes to exit.
Jan 31 02:06:34 np0005603541 systemd-logind[817]: Removed session 51.
Jan 31 02:06:34 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:34 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:35 np0005603541 python3.9[238794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:35 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:35 np0005603541 python3.9[238915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843194.6658506-2659-228988097213352/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:36 np0005603541 python3.9[239066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:36 np0005603541 python3.9[239192]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:36.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:37 np0005603541 python3.9[239342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:37 np0005603541 python3.9[239463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843196.7339542-2659-50224925608439/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:38 np0005603541 python3.9[239614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:38 np0005603541 python3.9[239735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843197.8381221-2659-109423552793706/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:39 np0005603541 python3.9[239885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:39 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 939 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:39 np0005603541 python3.9[240007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843198.8610978-2659-274451519948632/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:40 np0005603541 python3.9[240157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:40 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 939 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:40.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:40 np0005603541 python3.9[240278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843199.9465904-2659-269256932713031/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:41 np0005603541 python3.9[240430]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:42 np0005603541 python3.9[240583]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:06:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:43 np0005603541 python3.9[240735]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:06:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:43 np0005603541 python3.9[240887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:44 np0005603541 python3.9[241011]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769843203.3076794-2980-201329111986635/.source _original_basename=._os4zlmp follow=False checksum=3ff64c3930588132cc4bba29459247085785bfe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 02:06:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:44 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:45 np0005603541 python3.9[241163]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:06:45 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:45 np0005603541 python3.9[241315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:46 np0005603541 python3.9[241437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843205.3285465-3058-259269141993398/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:46 np0005603541 python3.9[241587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:06:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:47 np0005603541 python3.9[241708]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843206.4215794-3103-89048955742420/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:06:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:06:48 np0005603541 python3.9[241861]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 02:06:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:06:49
Jan 31 02:06:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:06:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:06:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta']
Jan 31 02:06:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:06:49 np0005603541 python3.9[242013]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:06:49 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:49 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:50 np0005603541 python3[242172]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:06:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:50.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:06:51 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c0ffd66f-e427-4f71-a91a-178413a5d7f4 does not exist
Jan 31 02:06:51 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev b183587c-faae-437e-ad76-638b0feefa79 does not exist
Jan 31 02:06:51 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 652c7263-4422-43a0-b4fa-ea062b96c5d1 does not exist
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.767893501 +0000 UTC m=+0.071662924 container create 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.728080671 +0000 UTC m=+0.031850114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:51 np0005603541 systemd[1]: Started libpod-conmon-3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1.scope.
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:06:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:06:51 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.884233103 +0000 UTC m=+0.188002546 container init 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.894891445 +0000 UTC m=+0.198660868 container start 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:51 np0005603541 gracious_heyrovsky[242491]: 167 167
Jan 31 02:06:51 np0005603541 systemd[1]: libpod-3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1.scope: Deactivated successfully.
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.903157359 +0000 UTC m=+0.206926802 container attach 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.904867791 +0000 UTC m=+0.208637214 container died 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:51 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d2ee1eea8e5519fe6507d71a3a0e79d667f9bef1bb7d6d48e28dbb1bf9f3626b-merged.mount: Deactivated successfully.
Jan 31 02:06:51 np0005603541 podman[242474]: 2026-01-31 07:06:51.997177212 +0000 UTC m=+0.300946645 container remove 3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:52 np0005603541 systemd[1]: libpod-conmon-3396f714061b0ad9664630bc472d7e99d2a5e5d52a0f9d15698cb742afb860e1.scope: Deactivated successfully.
Jan 31 02:06:52 np0005603541 podman[242517]: 2026-01-31 07:06:52.235831294 +0000 UTC m=+0.112135191 container create 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:06:52 np0005603541 podman[242517]: 2026-01-31 07:06:52.151262033 +0000 UTC m=+0.027565970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:52 np0005603541 systemd[1]: Started libpod-conmon-6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d.scope.
Jan 31 02:06:52 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:06:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:52 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:52 np0005603541 podman[242517]: 2026-01-31 07:06:52.388523211 +0000 UTC m=+0.264827118 container init 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:52 np0005603541 podman[242517]: 2026-01-31 07:06:52.396092587 +0000 UTC m=+0.272396494 container start 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:06:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:52 np0005603541 podman[242517]: 2026-01-31 07:06:52.769650328 +0000 UTC m=+0.645954235 container attach 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:06:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:52.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:53 np0005603541 relaxed_pare[242533]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:06:53 np0005603541 relaxed_pare[242533]: --> relative data size: 1.0
Jan 31 02:06:53 np0005603541 relaxed_pare[242533]: --> All data devices are unavailable
Jan 31 02:06:53 np0005603541 podman[242517]: 2026-01-31 07:06:53.166797548 +0000 UTC m=+1.043101475 container died 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:06:53 np0005603541 systemd[1]: libpod-6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d.scope: Deactivated successfully.
Jan 31 02:06:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:06:54 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:06:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:54.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:56 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:06:56 np0005603541 podman[242577]: 2026-01-31 07:06:56.09485828 +0000 UTC m=+2.653720503 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 02:06:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:56.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:06:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:57.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:06:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:06:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4210 writes, 19K keys, 4208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 4209 writes, 4207 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1801 writes, 8377 keys, 1799 commit groups, 1.0 writes per commit group, ingest: 10.48 MB, 0.02 MB/s
Interval WAL: 1800 writes, 1798 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.7      0.25              0.04         9    0.028       0      0       0.0       0.0
  L6      1/0    6.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    129.7    107.9      0.63              0.17         8    0.079     41K   4380       0.0       0.0
 Sum      1/0    6.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     92.9    100.5      0.88              0.22        17    0.052     41K   4380       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4    124.3    119.3      0.51              0.16        12    0.042     33K   3562       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    129.7    107.9      0.63              0.17         8    0.079     41K   4380       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.3      0.25              0.04         8    0.031       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.020, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.9 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561559fff1f0#2 capacity: 308.00 MB usage: 4.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(233,4.11 MB,1.33495%) FilterBlock(18,126.67 KB,0.0401633%) IndexBlock(18,221.84 KB,0.0703391%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 02:06:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:06:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:58.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:06:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:06:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:06:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:06:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-233419d9569044a20e7cc3b1338a5f873ea539b9d9e2b808f19e59e83f37188f-merged.mount: Deactivated successfully.
Jan 31 02:07:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 959 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:02 np0005603541 podman[242517]: 2026-01-31 07:07:02.58265853 +0000 UTC m=+10.458962437 container remove 6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pare, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 02:07:02 np0005603541 systemd[1]: libpod-conmon-6c01bb50ecd300ddabfa4c3951e3147a06433a276ce4fd0748b751794f4ee81d.scope: Deactivated successfully.
Jan 31 02:07:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:02 np0005603541 podman[242621]: 2026-01-31 07:07:02.835643105 +0000 UTC m=+6.715788191 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:07:02 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 959 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:07:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:07:03 np0005603541 podman[242280]: 2026-01-31 07:07:03.090080345 +0000 UTC m=+12.306800280 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.188850945 +0000 UTC m=+0.018491026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.295323634 +0000 UTC m=+0.124963695 container create 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:07:03 np0005603541 podman[242890]: 2026-01-31 07:07:03.212532477 +0000 UTC m=+0.027669692 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:07:03 np0005603541 systemd[1]: Started libpod-conmon-705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb.scope.
Jan 31 02:07:03 np0005603541 podman[242890]: 2026-01-31 07:07:03.408507909 +0000 UTC m=+0.223645074 container create 8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm)
Jan 31 02:07:03 np0005603541 python3[242172]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 02:07:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.446306629 +0000 UTC m=+0.275946710 container init 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.455878575 +0000 UTC m=+0.285518606 container start 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:07:03 np0005603541 nice_euclid[242912]: 167 167
Jan 31 02:07:03 np0005603541 systemd[1]: libpod-705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb.scope: Deactivated successfully.
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.472384091 +0000 UTC m=+0.302024122 container attach 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.47273687 +0000 UTC m=+0.302376901 container died 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:07:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9b5b799025b069b4674e8ff6543a47a1346d125c515da78eb3cc8ddf132d41a6-merged.mount: Deactivated successfully.
Jan 31 02:07:03 np0005603541 podman[242879]: 2026-01-31 07:07:03.61213367 +0000 UTC m=+0.441773701 container remove 705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:03 np0005603541 systemd[1]: libpod-conmon-705c5730dcf2d48d1dddfa1f02ecd15fd0230f98be492a3767b96f52f4f0e2fb.scope: Deactivated successfully.
Jan 31 02:07:03 np0005603541 podman[242984]: 2026-01-31 07:07:03.796983718 +0000 UTC m=+0.061984237 container create 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:03 np0005603541 podman[242984]: 2026-01-31 07:07:03.761626357 +0000 UTC m=+0.026626936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:03 np0005603541 systemd[1]: Started libpod-conmon-94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2.scope.
Jan 31 02:07:03 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b93e33e85959065c87167a346114f974bcd235c10672a9fffe0fcf7d4621cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b93e33e85959065c87167a346114f974bcd235c10672a9fffe0fcf7d4621cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b93e33e85959065c87167a346114f974bcd235c10672a9fffe0fcf7d4621cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:03 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b93e33e85959065c87167a346114f974bcd235c10672a9fffe0fcf7d4621cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:03 np0005603541 podman[242984]: 2026-01-31 07:07:03.893754398 +0000 UTC m=+0.158754977 container init 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:03 np0005603541 podman[242984]: 2026-01-31 07:07:03.898996688 +0000 UTC m=+0.163997217 container start 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:07:03 np0005603541 podman[242984]: 2026-01-31 07:07:03.910635654 +0000 UTC m=+0.175636193 container attach 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:07:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:04 np0005603541 eager_feistel[243000]: {
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:    "0": [
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:        {
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "devices": [
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "/dev/loop3"
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            ],
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "lv_name": "ceph_lv0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "lv_size": "7511998464",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "name": "ceph_lv0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "tags": {
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.cluster_name": "ceph",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.crush_device_class": "",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.encrypted": "0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.osd_id": "0",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.type": "block",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:                "ceph.vdo": "0"
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            },
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "type": "block",
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:            "vg_name": "ceph_vg0"
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:        }
Jan 31 02:07:04 np0005603541 eager_feistel[243000]:    ]
Jan 31 02:07:04 np0005603541 eager_feistel[243000]: }
Jan 31 02:07:04 np0005603541 systemd[1]: libpod-94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2.scope: Deactivated successfully.
Jan 31 02:07:04 np0005603541 podman[242984]: 2026-01-31 07:07:04.660073532 +0000 UTC m=+0.925074061 container died 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-35b93e33e85959065c87167a346114f974bcd235c10672a9fffe0fcf7d4621cd-merged.mount: Deactivated successfully.
Jan 31 02:07:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:04 np0005603541 podman[242984]: 2026-01-31 07:07:04.734543234 +0000 UTC m=+0.999543783 container remove 94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_feistel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:07:04 np0005603541 systemd[1]: libpod-conmon-94a07b582fc6749e92034c7100197e2c587dc9693b28806c482fdb8ac1bfc5b2.scope: Deactivated successfully.
Jan 31 02:07:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.375676698 +0000 UTC m=+0.059988687 container create 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:07:05 np0005603541 systemd[1]: Started libpod-conmon-9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c.scope.
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.34281722 +0000 UTC m=+0.027129219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.468977024 +0000 UTC m=+0.153289033 container init 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.477440202 +0000 UTC m=+0.161752151 container start 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:07:05 np0005603541 elated_perlman[243180]: 167 167
Jan 31 02:07:05 np0005603541 systemd[1]: libpod-9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c.scope: Deactivated successfully.
Jan 31 02:07:05 np0005603541 conmon[243180]: conmon 9ee927223a901929542d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c.scope/container/memory.events
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.486196868 +0000 UTC m=+0.170508917 container attach 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.487174372 +0000 UTC m=+0.171486351 container died 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1ff521b0f716029abf387ff61af590b35150921f44c03cfda05236c2c9b6dee3-merged.mount: Deactivated successfully.
Jan 31 02:07:05 np0005603541 podman[243164]: 2026-01-31 07:07:05.544843671 +0000 UTC m=+0.229155650 container remove 9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:07:05 np0005603541 systemd[1]: libpod-conmon-9ee927223a901929542d3ae90b65c8c4bd4694b21a48b5b035f700c04c175c7c.scope: Deactivated successfully.
Jan 31 02:07:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:05 np0005603541 podman[243204]: 2026-01-31 07:07:05.725621708 +0000 UTC m=+0.059149376 container create a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:07:05 np0005603541 systemd[1]: Started libpod-conmon-a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251.scope.
Jan 31 02:07:05 np0005603541 podman[243204]: 2026-01-31 07:07:05.700998073 +0000 UTC m=+0.034525821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:05 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb884a3a479474f08fa5f1d90b67bf3fff75371fc16c23d070f47d37d0c3cecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb884a3a479474f08fa5f1d90b67bf3fff75371fc16c23d070f47d37d0c3cecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb884a3a479474f08fa5f1d90b67bf3fff75371fc16c23d070f47d37d0c3cecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:05 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb884a3a479474f08fa5f1d90b67bf3fff75371fc16c23d070f47d37d0c3cecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:05 np0005603541 podman[243204]: 2026-01-31 07:07:05.855075654 +0000 UTC m=+0.188603402 container init a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:07:05 np0005603541 podman[243204]: 2026-01-31 07:07:05.859705837 +0000 UTC m=+0.193233495 container start a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:07:05 np0005603541 podman[243204]: 2026-01-31 07:07:05.870741629 +0000 UTC m=+0.204269337 container attach a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:05.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:06 np0005603541 gracious_napier[243220]: {
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:        "osd_id": 0,
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:        "type": "bluestore"
Jan 31 02:07:06 np0005603541 gracious_napier[243220]:    }
Jan 31 02:07:06 np0005603541 gracious_napier[243220]: }
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:06 np0005603541 systemd[1]: libpod-a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251.scope: Deactivated successfully.
Jan 31 02:07:06 np0005603541 podman[243204]: 2026-01-31 07:07:06.720034634 +0000 UTC m=+1.053562332 container died a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:07:06 np0005603541 systemd[1]: var-lib-containers-storage-overlay-fb884a3a479474f08fa5f1d90b67bf3fff75371fc16c23d070f47d37d0c3cecc-merged.mount: Deactivated successfully.
Jan 31 02:07:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:06.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:06 np0005603541 podman[243204]: 2026-01-31 07:07:06.906374729 +0000 UTC m=+1.239902427 container remove a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 02:07:06 np0005603541 systemd[1]: libpod-conmon-a280e284bbbe68fd5f2b0ef7b5e633eaa79f5baaed1642ba6d27ecdc8f392251.scope: Deactivated successfully.
Jan 31 02:07:06 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 969 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:07:07 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev fdadd2bb-1b22-4a09-954f-422a2a81ada0 does not exist
Jan 31 02:07:07 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev be717c4f-b3aa-44b9-b0c7-fd72852536af does not exist
Jan 31 02:07:07 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 21fc79ec-ca4a-4569-b853-7e4ee8e7e884 does not exist
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 969 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:07:07 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:07:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:07.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:08 np0005603541 python3.9[243433]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:07:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:08.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:09.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:09 np0005603541 python3.9[243588]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:07:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:10.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:11 np0005603541 python3.9[243740]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:07:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:12 np0005603541 python3[243893]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:07:12 np0005603541 podman[243927]: 2026-01-31 07:07:12.191126692 +0000 UTC m=+0.023805317 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:07:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:12 np0005603541 podman[243927]: 2026-01-31 07:07:12.88933734 +0000 UTC m=+0.722015935 container create bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:07:12 np0005603541 python3[243893]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 02:07:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:12.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:13 np0005603541 python3.9[244115]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:07:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:14 np0005603541 python3.9[244270]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:07:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:14 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 979 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:14 np0005603541 python3.9[244421]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843234.3753247-3391-202640991581394/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:07:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:15 np0005603541 python3.9[244497]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:07:15 np0005603541 systemd[1]: Reloading.
Jan 31 02:07:15 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:15 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:15 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:16 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 979 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:16 np0005603541 python3.9[244609]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:07:16 np0005603541 systemd[1]: Reloading.
Jan 31 02:07:16 np0005603541 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:16 np0005603541 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:16 np0005603541 systemd[1]: Starting nova_compute container...
Jan 31 02:07:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:16 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:16 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:16 np0005603541 podman[244698]: 2026-01-31 07:07:16.77580368 +0000 UTC m=+0.090216980 container init bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:07:16 np0005603541 podman[244698]: 2026-01-31 07:07:16.78029102 +0000 UTC m=+0.094704280 container start bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + sudo -E kolla_set_configs
Jan 31 02:07:16 np0005603541 podman[244698]: nova_compute
Jan 31 02:07:16 np0005603541 systemd[1]: Started nova_compute container.
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Validating config file
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying service configuration files
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:16.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Writing out command to execute
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:16 np0005603541 nova_compute[244715]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:07:16 np0005603541 nova_compute[244715]: ++ cat /run_command
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + CMD=nova-compute
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + ARGS=
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + sudo kolla_copy_cacerts
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + [[ ! -n '' ]]
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + . kolla_extend_start
Jan 31 02:07:16 np0005603541 nova_compute[244715]: Running command: 'nova-compute'
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + umask 0022
Jan 31 02:07:16 np0005603541 nova_compute[244715]: + exec nova-compute
Jan 31 02:07:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:17.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:18 np0005603541 python3.9[244877]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:18.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:19 np0005603541 python3.9[245027]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:07:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:07:20.135 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:07:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:07:20.136 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:07:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:07:20.136 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:07:20 np0005603541 python3.9[245179]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:07:20 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:20 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 984 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:20.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:21 np0005603541 nova_compute[244715]: 2026-01-31 07:07:21.866 244719 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:21 np0005603541 nova_compute[244715]: 2026-01-31 07:07:21.866 244719 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:21 np0005603541 nova_compute[244715]: 2026-01-31 07:07:21.866 244719 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:21 np0005603541 nova_compute[244715]: 2026-01-31 07:07:21.866 244719 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:07:21 np0005603541 python3.9[245332]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:07:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:21 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:07:21 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 984 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:21 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:07:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:21.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:22 np0005603541 nova_compute[244715]: 2026-01-31 07:07:22.224 244719 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:07:22 np0005603541 nova_compute[244715]: 2026-01-31 07:07:22.258 244719 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:07:22 np0005603541 nova_compute[244715]: 2026-01-31 07:07:22.258 244719 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:07:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:22 np0005603541 python3.9[245513]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:07:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:22.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:22 np0005603541 systemd[1]: Stopping nova_compute container...
Jan 31 02:07:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:23 np0005603541 systemd[1]: libpod-bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467.scope: Deactivated successfully.
Jan 31 02:07:23 np0005603541 systemd[1]: libpod-bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467.scope: Consumed 2.537s CPU time.
Jan 31 02:07:23 np0005603541 podman[245517]: 2026-01-31 07:07:23.14358536 +0000 UTC m=+0.186075909 container died bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 02:07:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467-userdata-shm.mount: Deactivated successfully.
Jan 31 02:07:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5-merged.mount: Deactivated successfully.
Jan 31 02:07:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:25.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:29 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 989 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:29.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:30 np0005603541 podman[245552]: 2026-01-31 07:07:30.76974622 +0000 UTC m=+0.060421457 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:07:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:32 np0005603541 podman[245517]: 2026-01-31 07:07:32.087978315 +0000 UTC m=+9.130468854 container cleanup bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:07:32 np0005603541 podman[245517]: nova_compute
Jan 31 02:07:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:32 np0005603541 podman[245573]: nova_compute
Jan 31 02:07:32 np0005603541 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 02:07:32 np0005603541 systemd[1]: Stopped nova_compute container.
Jan 31 02:07:32 np0005603541 systemd[1]: Starting nova_compute container...
Jan 31 02:07:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6222da15e3829c2e0227e899044220c377d0c53b6bf4136b9890d3fc6a32eb5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:32 np0005603541 podman[245586]: 2026-01-31 07:07:32.230822915 +0000 UTC m=+0.082758530 container init bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:07:32 np0005603541 podman[245586]: 2026-01-31 07:07:32.236357902 +0000 UTC m=+0.088293497 container start bd9288f17d9c7d444f2031c517a6975c8fd8bd663d4e5554826ccf9242c5c467 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + sudo -E kolla_set_configs
Jan 31 02:07:32 np0005603541 podman[245586]: nova_compute
Jan 31 02:07:32 np0005603541 systemd[1]: Started nova_compute container.
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Validating config file
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying service configuration files
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Writing out command to execute
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:32 np0005603541 nova_compute[245601]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:07:32 np0005603541 nova_compute[245601]: ++ cat /run_command
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + CMD=nova-compute
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + ARGS=
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + sudo kolla_copy_cacerts
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + [[ ! -n '' ]]
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + . kolla_extend_start
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + umask 0022
Jan 31 02:07:32 np0005603541 nova_compute[245601]: + exec nova-compute
Jan 31 02:07:32 np0005603541 nova_compute[245601]: Running command: 'nova-compute'
Jan 31 02:07:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:33 np0005603541 python3.9[245764]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 989 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:33 np0005603541 systemd[1]: Started libpod-conmon-8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86.scope.
Jan 31 02:07:33 np0005603541 podman[245803]: 2026-01-31 07:07:33.56549115 +0000 UTC m=+0.383901543 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:07:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:07:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cd5d43b4646a44b99a9dfe67ea37c1b894741189b84d428c9539108f4e520f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cd5d43b4646a44b99a9dfe67ea37c1b894741189b84d428c9539108f4e520f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:33 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cd5d43b4646a44b99a9dfe67ea37c1b894741189b84d428c9539108f4e520f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:33 np0005603541 podman[245790]: 2026-01-31 07:07:33.65274086 +0000 UTC m=+0.534418283 container init 8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 02:07:33 np0005603541 podman[245790]: 2026-01-31 07:07:33.660900941 +0000 UTC m=+0.542578334 container start 8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 02:07:33 np0005603541 nova_compute_init[245839]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 02:07:33 np0005603541 systemd[1]: libpod-8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86.scope: Deactivated successfully.
Jan 31 02:07:33 np0005603541 python3.9[245764]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 02:07:33 np0005603541 podman[245840]: 2026-01-31 07:07:33.819345366 +0000 UTC m=+0.077814739 container died 8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:07:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86-userdata-shm.mount: Deactivated successfully.
Jan 31 02:07:34 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c2cd5d43b4646a44b99a9dfe67ea37c1b894741189b84d428c9539108f4e520f-merged.mount: Deactivated successfully.
Jan 31 02:07:34 np0005603541 podman[245840]: 2026-01-31 07:07:34.244984246 +0000 UTC m=+0.503453579 container cleanup 8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:34 np0005603541 systemd[1]: libpod-conmon-8ee9cde5e48af96c629d69ac5bc0d49ccb5ce64a5ae77441c2a89ec9c6ee1a86.scope: Deactivated successfully.
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.457 245605 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.458 245605 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.459 245605 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.459 245605 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.604 245605 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.616 245605 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:07:34 np0005603541 nova_compute[245601]: 2026-01-31 07:07:34.616 245605 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:07:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:34 np0005603541 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 02:07:34 np0005603541 systemd[1]: session-50.scope: Consumed 1min 46.500s CPU time.
Jan 31 02:07:34 np0005603541 systemd-logind[817]: Session 50 logged out. Waiting for processes to exit.
Jan 31 02:07:34 np0005603541 systemd-logind[817]: Removed session 50.
Jan 31 02:07:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.412 245605 INFO nova.virt.driver [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.620 245605 INFO nova.compute.provider_config [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.643 245605 DEBUG oslo_concurrency.lockutils [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.644 245605 DEBUG oslo_concurrency.lockutils [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.644 245605 DEBUG oslo_concurrency.lockutils [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.644 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.645 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.645 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.645 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.645 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.646 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.646 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.646 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.646 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.647 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.647 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.648 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.648 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.648 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.648 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.649 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.649 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.649 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.649 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.649 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.650 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.650 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.650 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.650 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.651 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.651 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.651 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.651 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.652 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.652 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.652 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.652 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.652 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.653 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.653 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.653 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.654 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.654 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.654 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.654 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.654 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.655 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.655 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.655 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.655 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.656 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.656 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.656 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.656 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.656 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.657 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.657 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.657 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.657 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.658 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.658 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.658 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.658 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.658 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.659 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.659 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.659 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.659 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.659 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.660 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.660 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.660 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.660 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.660 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.661 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.661 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.661 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.661 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.661 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.662 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.662 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.662 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.662 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.662 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.663 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.663 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.663 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.663 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.663 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.664 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.665 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.665 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.665 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.665 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.665 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.666 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.666 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.666 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.666 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.666 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.667 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.668 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.668 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.668 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.668 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.668 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.669 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.670 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.670 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.670 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.670 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.670 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.671 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.671 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.671 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.671 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.672 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.673 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.673 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.673 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.673 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.673 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.674 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.674 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.674 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.674 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.674 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.675 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.676 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.677 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.678 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.679 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.680 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 999 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.681 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.682 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.683 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.684 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.685 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.686 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.686 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.686 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.686 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.686 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.687 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.688 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.689 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.690 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.691 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.692 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.693 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.694 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.695 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.696 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.697 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.698 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.699 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.700 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.701 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.702 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.702 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.702 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.702 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.702 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.703 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.704 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.705 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.705 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.705 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.705 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.705 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.706 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.706 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.706 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.706 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.707 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.708 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.709 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.710 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.711 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.712 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.713 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.714 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.715 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.716 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.717 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.718 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.719 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.720 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.721 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.722 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.723 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.724 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.725 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 WARNING oslo_config.cfg [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 02:07:35 np0005603541 nova_compute[245601]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 02:07:35 np0005603541 nova_compute[245601]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 02:07:35 np0005603541 nova_compute[245601]: and ``live_migration_inbound_addr`` respectively.
Jan 31 02:07:35 np0005603541 nova_compute[245601]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.726 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.727 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.728 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rbd_secret_uuid        = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.729 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.730 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.731 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.732 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.733 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.734 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.735 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.736 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.737 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.738 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.739 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.740 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.741 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.742 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.743 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.744 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.745 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.746 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.746 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.746 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.746 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.746 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.747 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.748 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.749 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.750 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.751 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.751 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.751 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.751 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.751 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.752 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.752 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.752 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.752 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.752 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.753 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.754 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.755 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.756 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.757 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.758 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.759 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.760 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.761 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.762 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.763 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.764 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.765 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.766 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.767 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.768 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.769 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.770 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.771 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.772 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.773 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.774 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.775 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.776 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.777 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.778 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.779 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.780 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.781 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.782 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.783 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.784 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.785 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.786 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.787 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.788 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.788 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.788 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.788 245605 DEBUG oslo_service.service [None req-183ece37-b1b2-4a07-9e70-b536425f0a5d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.789 245605 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.829 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.830 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.830 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.830 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 02:07:35 np0005603541 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:07:35 np0005603541 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.896 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7faaf5158e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.899 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7faaf5158e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.899 245605 INFO nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.956 245605 WARNING nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 02:07:35 np0005603541 nova_compute[245601]: 2026-01-31 07:07:35.956 245605 DEBUG nova.virt.libvirt.volume.mount [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 31 02:07:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.747 245605 INFO nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <host>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <uuid>447bf06a-a3b2-47e0-813a-295d0298e0f3</uuid>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <arch>x86_64</arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model>EPYC-Rome-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <vendor>AMD</vendor>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <microcode version='16777317'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <signature family='23' model='49' stepping='0'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='x2apic'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='tsc-deadline'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='osxsave'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='hypervisor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='tsc_adjust'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='spec-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='stibp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='arch-capabilities'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='cmp_legacy'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='topoext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='virt-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='lbrv'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='tsc-scale'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='vmcb-clean'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='pause-filter'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='pfthreshold'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='svme-addr-chk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='rdctl-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='mds-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature name='pschange-mc-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <pages unit='KiB' size='4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <pages unit='KiB' size='2048'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <pages unit='KiB' size='1048576'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <power_management>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <suspend_mem/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </power_management>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <iommu support='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <migration_features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <live/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <uri_transports>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <uri_transport>tcp</uri_transport>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <uri_transport>rdma</uri_transport>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </uri_transports>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </migration_features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <topology>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <cells num='1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <cell id='0'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <memory unit='KiB'>7864296</memory>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <pages unit='KiB' size='4'>1966074</pages>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <distances>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <sibling id='0' value='10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          </distances>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          <cpus num='8'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:          </cpus>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        </cell>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </cells>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </topology>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <cache>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </cache>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <secmodel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model>selinux</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <doi>0</doi>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </secmodel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <secmodel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model>dac</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <doi>0</doi>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </secmodel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </host>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <guest>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <os_type>hvm</os_type>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <arch name='i686'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <wordsize>32</wordsize>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <domain type='qemu'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <domain type='kvm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <pae/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <nonpae/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <acpi default='on' toggle='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <apic default='on' toggle='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <cpuselection/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <deviceboot/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <externalSnapshot/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </guest>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <guest>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <os_type>hvm</os_type>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <arch name='x86_64'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <wordsize>64</wordsize>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <domain type='qemu'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <domain type='kvm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <acpi default='on' toggle='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <apic default='on' toggle='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <cpuselection/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <deviceboot/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <externalSnapshot/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </guest>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 
Jan 31 02:07:36 np0005603541 nova_compute[245601]: </capabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.756 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.778 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 02:07:36 np0005603541 nova_compute[245601]: <domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <domain>kvm</domain>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <arch>i686</arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <vcpu max='240'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <iothreads supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <os supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='firmware'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <loader supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>rom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pflash</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='readonly'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>yes</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='secure'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </loader>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </os>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='maximum' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='maximumMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-model' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <vendor>AMD</vendor>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='x2apic'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='stibp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='succor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lbrv'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='custom' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Dhyana-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 999 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <memoryBacking supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='sourceType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>anonymous</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>memfd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </memoryBacking>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <disk supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='diskDevice'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>disk</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cdrom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>floppy</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>lun</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ide</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>fdc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>sata</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </disk>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <graphics supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vnc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egl-headless</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </graphics>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <video supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='modelType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vga</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cirrus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>none</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>bochs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ramfb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </video>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hostdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='mode'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>subsystem</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='startupPolicy'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>mandatory</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>requisite</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>optional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='subsysType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pci</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='capsType'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='pciBackend'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hostdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <rng supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>random</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </rng>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <filesystem supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='driverType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>path</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>handle</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtiofs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </filesystem>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tpm supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-tis</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-crb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emulator</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>external</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendVersion'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>2.0</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </tpm>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <redirdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </redirdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <channel supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </channel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <crypto supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </crypto>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <interface supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>passt</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </interface>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <panic supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>isa</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>hyperv</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </panic>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <console supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>null</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dev</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pipe</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stdio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>udp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tcp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu-vdagent</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </console>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <gic supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <vmcoreinfo supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <genid supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backingStoreInput supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backup supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <async-teardown supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <s390-pv supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <ps2 supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tdx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sev supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sgx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hyperv supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='features'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>relaxed</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vapic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>spinlocks</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vpindex</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>runtime</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>synic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stimer</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reset</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vendor_id</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>frequencies</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reenlightenment</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tlbflush</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ipi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>avic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emsr_bitmap</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>xmm_input</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <spinlocks>4095</spinlocks>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <stimer_direct>on</stimer_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hyperv>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <launchSecurity supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: </domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.796 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 02:07:36 np0005603541 nova_compute[245601]: <domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <domain>kvm</domain>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <arch>i686</arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <vcpu max='4096'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <iothreads supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <os supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='firmware'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <loader supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>rom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pflash</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='readonly'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>yes</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='secure'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </loader>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </os>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='maximum' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='maximumMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-model' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <vendor>AMD</vendor>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='x2apic'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='stibp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='succor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lbrv'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='custom' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Dhyana-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <memoryBacking supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='sourceType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>anonymous</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>memfd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </memoryBacking>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <disk supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='diskDevice'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>disk</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cdrom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>floppy</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>lun</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>fdc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>sata</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </disk>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <graphics supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vnc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egl-headless</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </graphics>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <video supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='modelType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vga</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cirrus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>none</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>bochs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ramfb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </video>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hostdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='mode'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>subsystem</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='startupPolicy'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>mandatory</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>requisite</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>optional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='subsysType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pci</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='capsType'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='pciBackend'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hostdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <rng supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>random</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </rng>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <filesystem supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='driverType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>path</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>handle</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtiofs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </filesystem>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tpm supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-tis</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-crb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emulator</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>external</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendVersion'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>2.0</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </tpm>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <redirdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </redirdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <channel supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </channel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <crypto supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </crypto>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <interface supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>passt</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </interface>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <panic supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>isa</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>hyperv</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </panic>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <console supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>null</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dev</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pipe</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stdio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>udp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tcp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu-vdagent</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </console>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <gic supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <vmcoreinfo supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <genid supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backingStoreInput supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backup supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <async-teardown supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <s390-pv supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <ps2 supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tdx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sev supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sgx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hyperv supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='features'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>relaxed</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vapic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>spinlocks</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vpindex</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>runtime</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>synic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stimer</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reset</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vendor_id</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>frequencies</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reenlightenment</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tlbflush</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ipi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>avic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emsr_bitmap</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>xmm_input</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <spinlocks>4095</spinlocks>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <stimer_direct>on</stimer_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hyperv>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <launchSecurity supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: </domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.837 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.843 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 02:07:36 np0005603541 nova_compute[245601]: <domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <domain>kvm</domain>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <arch>x86_64</arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <vcpu max='240'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <iothreads supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <os supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='firmware'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <loader supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>rom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pflash</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='readonly'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>yes</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='secure'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </loader>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </os>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='maximum' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='maximumMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-model' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <vendor>AMD</vendor>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='x2apic'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='stibp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='succor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lbrv'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='custom' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Dhyana-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:07:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='athlon-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='core2duo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='coreduo-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='n270-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='phenom-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <memoryBacking supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='sourceType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>anonymous</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>memfd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </memoryBacking>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <disk supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='diskDevice'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>disk</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cdrom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>floppy</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>lun</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ide</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>fdc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>sata</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </disk>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <graphics supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vnc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egl-headless</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </graphics>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <video supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='modelType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vga</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>cirrus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>none</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>bochs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ramfb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </video>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hostdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='mode'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>subsystem</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='startupPolicy'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>mandatory</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>requisite</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>optional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='subsysType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pci</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='capsType'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='pciBackend'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hostdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <rng supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>random</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>egd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </rng>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <filesystem supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='driverType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>path</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>handle</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>virtiofs</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </filesystem>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tpm supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-tis</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tpm-crb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emulator</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>external</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendVersion'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>2.0</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </tpm>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <redirdev supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </redirdev>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <channel supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </channel>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <crypto supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </crypto>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <interface supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='backendType'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>passt</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </interface>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <panic supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>isa</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>hyperv</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </panic>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <console supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>null</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vc</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dev</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>file</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pipe</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stdio</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>udp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tcp</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>qemu-vdagent</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </console>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </devices>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <gic supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <vmcoreinfo supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <genid supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backingStoreInput supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <backup supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <async-teardown supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <s390-pv supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <ps2 supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <tdx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sev supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <sgx supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <hyperv supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='features'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>relaxed</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vapic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>spinlocks</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vpindex</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>runtime</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>synic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>stimer</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reset</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>vendor_id</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>frequencies</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>reenlightenment</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>tlbflush</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>ipi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>avic</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>emsr_bitmap</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>xmm_input</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <spinlocks>4095</spinlocks>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <stimer_direct>on</stimer_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </defaults>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </hyperv>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <launchSecurity supported='no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </features>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: </domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 02:07:36 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.907 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 02:07:36 np0005603541 nova_compute[245601]: <domainCapabilities>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <domain>kvm</domain>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <arch>x86_64</arch>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <vcpu max='4096'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <iothreads supported='yes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <os supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <enum name='firmware'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>efi</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <loader supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>rom</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>pflash</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='readonly'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>yes</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='secure'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>yes</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>no</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </loader>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  </os>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:  <cpu>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='maximum' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <enum name='maximumMigratable'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>on</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <value>off</value>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='host-model' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <vendor>AMD</vendor>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='x2apic'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='stibp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='succor'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lbrv'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:    <mode name='custom' supported='yes'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Broadwell-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bhi-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ddpd-u'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sha512'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm3'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sm4'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Cooperlake-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Denverton-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='Dhyana-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='amd-psfd'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='auto-ibrs'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:36 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibpb-brtype'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='no-nested-data-bp'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='null-sel-clr-base'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='perfmon-v2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='prefetchi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbpb'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='stibp-always-on'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='EPYC-v5'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-128'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-256'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx10-512'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='prefetchiti'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Haswell-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='IvyBridge-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='KnightsMill-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-4fmaps'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-4vnniw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512er'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512pf'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fma4'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tbm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xop'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='amx-tile'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-bf16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-fp16'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bitalg'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vbmi2'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrc'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fzrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='la57'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='taa-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='tsx-ldtrk'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SierraForest'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='SierraForest-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ifma'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-ne-convert'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx-vnni-int8'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bhi-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='bus-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cmpccxadd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fbsdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='fsrs'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ibrs-all'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='intel-psfd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ipred-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='lam'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mcdt-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pbrsb-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='psdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rrsba-ctrl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='serialize'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vaes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='vpclmulqdq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='hle'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='rtm'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512bw'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512cd'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512dq'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512f'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='avx512vl'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='invpcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pcid'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='pku'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Snowridge'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='mpx'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v2'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v3'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='core-capability'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='split-lock-detect'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='Snowridge-v4'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='cldemote'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='erms'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='gfni'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdir64b'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='movdiri'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='xsaves'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='athlon'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='athlon-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='core2duo'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='core2duo-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='coreduo'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='coreduo-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='n270'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='n270-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='ss'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='phenom'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <blockers model='phenom-v1'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnow'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <feature name='3dnowext'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </blockers>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </mode>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  </cpu>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  <memoryBacking supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <enum name='sourceType'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <value>file</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <value>anonymous</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <value>memfd</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  </memoryBacking>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  <devices>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <disk supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='diskDevice'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>disk</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>cdrom</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>floppy</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>lun</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>fdc</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>sata</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </disk>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <graphics supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vnc</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>egl-headless</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </graphics>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <video supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='modelType'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vga</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>cirrus</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>none</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>bochs</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>ramfb</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </video>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <hostdev supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='mode'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>subsystem</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='startupPolicy'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>mandatory</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>requisite</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>optional</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='subsysType'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>pci</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>scsi</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='capsType'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='pciBackend'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </hostdev>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <rng supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio-transitional</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtio-non-transitional</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>random</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>egd</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </rng>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <filesystem supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='driverType'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>path</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>handle</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>virtiofs</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </filesystem>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <tpm supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>tpm-tis</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>tpm-crb</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>emulator</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>external</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='backendVersion'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>2.0</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </tpm>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <redirdev supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='bus'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>usb</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </redirdev>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <channel supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </channel>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <crypto supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='model'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>qemu</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='backendModel'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>builtin</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </crypto>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <interface supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='backendType'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>default</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>passt</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </interface>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <panic supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='model'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>isa</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>hyperv</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </panic>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <console supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='type'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>null</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vc</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>pty</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>dev</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>file</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>pipe</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>stdio</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>udp</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>tcp</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>unix</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>qemu-vdagent</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>dbus</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </console>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  </devices>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  <features>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <gic supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <vmcoreinfo supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <genid supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <backingStoreInput supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <backup supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <async-teardown supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <s390-pv supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <ps2 supported='yes'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <tdx supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <sev supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <sgx supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <hyperv supported='yes'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <enum name='features'>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>relaxed</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vapic</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>spinlocks</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vpindex</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>runtime</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>synic</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>stimer</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>reset</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>vendor_id</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>frequencies</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>reenlightenment</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>tlbflush</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>ipi</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>avic</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>emsr_bitmap</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <value>xmm_input</value>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </enum>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      <defaults>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <spinlocks>4095</spinlocks>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <stimer_direct>on</stimer_direct>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:      </defaults>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    </hyperv>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:    <launchSecurity supported='no'/>
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  </features>
Jan 31 02:07:37 np0005603541 nova_compute[245601]: </domainCapabilities>
Jan 31 02:07:37 np0005603541 nova_compute[245601]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.965 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.966 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.966 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.971 245605 INFO nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Secure Boot support detected
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.983 245605 INFO nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.984 245605 INFO nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.994 245605 DEBUG nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 02:07:37 np0005603541 nova_compute[245601]:  <model>Nehalem</model>
Jan 31 02:07:37 np0005603541 nova_compute[245601]: </cpu>
Jan 31 02:07:37 np0005603541 nova_compute[245601]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:36.997 245605 DEBUG nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.027 245605 INFO nova.virt.node [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Determined node identity 7666a20e-f730-4016-ad1a-a5df3a106dcd from /var/lib/nova/compute_id
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.055 245605 WARNING nova.compute.manager [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Compute nodes ['7666a20e-f730-4016-ad1a-a5df3a106dcd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 02:07:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.110 245605 INFO nova.compute.manager [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.153 245605 WARNING nova.compute.manager [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.154 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.154 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.154 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.155 245605 DEBUG nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.155 245605 DEBUG oslo_concurrency.processutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:07:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:07:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888886798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.590 245605 DEBUG oslo_concurrency.processutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:07:37 np0005603541 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 02:07:37 np0005603541 systemd[1]: Started libvirt nodedev daemon.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.850 245605 WARNING nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.851 245605 DEBUG nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5237MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.851 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.852 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.870 245605 WARNING nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] No compute node record for compute-0.ctlplane.example.com:7666a20e-f730-4016-ad1a-a5df3a106dcd: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7666a20e-f730-4016-ad1a-a5df3a106dcd could not be found.
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.899 245605 INFO nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7666a20e-f730-4016-ad1a-a5df3a106dcd
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.966 245605 DEBUG nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:07:37 np0005603541 nova_compute[245601]: 2026-01-31 07:07:37.967 245605 DEBUG nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:07:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.118 245605 INFO nova.scheduler.client.report [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] [req-31b9caa9-0213-4e6e-8913-0a18250a7738] Created resource provider record via placement API for resource provider with UUID 7666a20e-f730-4016-ad1a-a5df3a106dcd and name compute-0.ctlplane.example.com.
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.247 245605 DEBUG oslo_concurrency.processutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:07:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:07:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174364683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.656 245605 DEBUG oslo_concurrency.processutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.661 245605 DEBUG nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 02:07:38 np0005603541 nova_compute[245601]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.662 245605 INFO nova.virt.libvirt.host [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.663 245605 DEBUG nova.compute.provider_tree [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Updating inventory in ProviderTree for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.663 245605 DEBUG nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.666 245605 DEBUG nova.virt.libvirt.driver [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Libvirt baseline CPU <cpu>
Jan 31 02:07:38 np0005603541 nova_compute[245601]:  <arch>x86_64</arch>
Jan 31 02:07:38 np0005603541 nova_compute[245601]:  <model>Nehalem</model>
Jan 31 02:07:38 np0005603541 nova_compute[245601]:  <vendor>AMD</vendor>
Jan 31 02:07:38 np0005603541 nova_compute[245601]:  <topology sockets="8" cores="1" threads="1"/>
Jan 31 02:07:38 np0005603541 nova_compute[245601]: </cpu>
Jan 31 02:07:38 np0005603541 nova_compute[245601]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 31 02:07:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.737 245605 DEBUG nova.scheduler.client.report [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Updated inventory for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.738 245605 DEBUG nova.compute.provider_tree [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Updating resource provider 7666a20e-f730-4016-ad1a-a5df3a106dcd generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.738 245605 DEBUG nova.compute.provider_tree [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Updating inventory in ProviderTree for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.845 245605 DEBUG nova.compute.provider_tree [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Updating resource provider 7666a20e-f730-4016-ad1a-a5df3a106dcd generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.886 245605 DEBUG nova.compute.resource_tracker [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.886 245605 DEBUG oslo_concurrency.lockutils [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.886 245605 DEBUG nova.service [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 02:07:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.996 245605 DEBUG nova.service [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 02:07:38 np0005603541 nova_compute[245601]: 2026-01-31 07:07:38.997 245605 DEBUG nova.servicegroup.drivers.db [None req-2ac81dda-e24e-4efe-96cc-bf62996a175f - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 02:07:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:41 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1004 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:41.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1004 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.151643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262151704, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1399, "num_deletes": 257, "total_data_size": 1911056, "memory_usage": 1949440, "flush_reason": "Manual Compaction"}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262170477, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1869877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18655, "largest_seqno": 20053, "table_properties": {"data_size": 1863833, "index_size": 3055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15518, "raw_average_key_size": 20, "raw_value_size": 1850333, "raw_average_value_size": 2428, "num_data_blocks": 135, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843149, "oldest_key_time": 1769843149, "file_creation_time": 1769843262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18899 microseconds, and 5878 cpu microseconds.
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.170550) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1869877 bytes OK
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.170580) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.172929) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.172955) EVENT_LOG_v1 {"time_micros": 1769843262172948, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.172979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1904702, prev total WAL file size 1904702, number of live WAL files 2.
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.173812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1826KB)], [41(6850KB)]
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262173888, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8884563, "oldest_snapshot_seqno": -1}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5447 keys, 8684264 bytes, temperature: kUnknown
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262235522, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 8684264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8649597, "index_size": 19965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 141150, "raw_average_key_size": 25, "raw_value_size": 8552128, "raw_average_value_size": 1570, "num_data_blocks": 798, "num_entries": 5447, "num_filter_entries": 5447, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.235929) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 8684264 bytes
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.238285) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.7 rd, 140.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(9.4) write-amplify(4.6) OK, records in: 5974, records dropped: 527 output_compression: NoCompression
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.238585) EVENT_LOG_v1 {"time_micros": 1769843262238407, "job": 20, "event": "compaction_finished", "compaction_time_micros": 61811, "compaction_time_cpu_micros": 25052, "output_level": 6, "num_output_files": 1, "total_output_size": 8684264, "num_input_records": 5974, "num_output_records": 5447, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262239619, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262240894, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.173660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.241063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.241072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.241075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.241077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:42.241079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:43.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:45.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1009 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:47 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1009 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:07:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:48.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:07:49
Jan 31 02:07:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:07:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:07:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'vms']
Jan 31 02:07:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:07:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:51.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1014 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:52 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1014 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:07:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:07:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:07:55 np0005603541 nova_compute[245601]: 2026-01-31 07:07:54.999 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:07:55 np0005603541 nova_compute[245601]: 2026-01-31 07:07:55.073 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:07:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:55.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:07:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:56.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1019 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1019 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.320301) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277320414, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 424, "num_deletes": 251, "total_data_size": 292803, "memory_usage": 301832, "flush_reason": "Manual Compaction"}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277325791, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 288495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20054, "largest_seqno": 20477, "table_properties": {"data_size": 286133, "index_size": 462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6265, "raw_average_key_size": 19, "raw_value_size": 281262, "raw_average_value_size": 854, "num_data_blocks": 21, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843263, "oldest_key_time": 1769843263, "file_creation_time": 1769843277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5522 microseconds, and 1991 cpu microseconds.
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.325842) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 288495 bytes OK
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.325867) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.327605) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.327627) EVENT_LOG_v1 {"time_micros": 1769843277327621, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.327651) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 290172, prev total WAL file size 290172, number of live WAL files 2.
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.328098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(281KB)], [44(8480KB)]
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277328145, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8972759, "oldest_snapshot_seqno": -1}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5266 keys, 7228829 bytes, temperature: kUnknown
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277372783, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7228829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7196533, "index_size": 18042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 138161, "raw_average_key_size": 26, "raw_value_size": 7103246, "raw_average_value_size": 1348, "num_data_blocks": 712, "num_entries": 5266, "num_filter_entries": 5266, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.373039) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7228829 bytes
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.377026) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.7 rd, 161.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(56.2) write-amplify(25.1) OK, records in: 5776, records dropped: 510 output_compression: NoCompression
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.377068) EVENT_LOG_v1 {"time_micros": 1769843277377052, "job": 22, "event": "compaction_finished", "compaction_time_micros": 44713, "compaction_time_cpu_micros": 15926, "output_level": 6, "num_output_files": 1, "total_output_size": 7228829, "num_input_records": 5776, "num_output_records": 5266, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277377288, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277378368, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.328004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.378427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.378432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.378433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.378435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:07:57.378436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:07:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:57.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:07:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:58.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:07:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:07:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:07:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:07:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:59.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1024 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:02 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1024 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:02.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:03 np0005603541 podman[246154]: 2026-01-31 07:08:03.037670696 +0000 UTC m=+0.065588698 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:08:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:04 np0005603541 podman[246174]: 2026-01-31 07:08:04.085734057 +0000 UTC m=+0.119342043 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 02:08:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:06.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:06.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1029 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4169574238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1029 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:08.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:08 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:08:08 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4169574238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:08:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986348227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986348227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:09 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:10.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev bee027b5-e7d2-4923-8e40-4f17cf5201fa does not exist
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev e8cd14b7-3d8f-4784-9134-ed49c6be8902 does not exist
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 50e51efe-d843-47be-86db-406265174274 does not exist
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490018636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490018636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.668639898 +0000 UTC m=+0.041471283 container create 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:08:10 np0005603541 systemd[1]: Started libpod-conmon-472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2.scope.
Jan 31 02:08:10 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.647945298 +0000 UTC m=+0.020776703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.7596186 +0000 UTC m=+0.132450005 container init 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.765238338 +0000 UTC m=+0.138069723 container start 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:08:10 np0005603541 hopeful_vaughan[246493]: 167 167
Jan 31 02:08:10 np0005603541 systemd[1]: libpod-472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2.scope: Deactivated successfully.
Jan 31 02:08:10 np0005603541 conmon[246493]: conmon 472ca664c4a39fc107fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2.scope/container/memory.events
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.772713043 +0000 UTC m=+0.145544458 container attach 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.774547568 +0000 UTC m=+0.147378953 container died 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:08:10 np0005603541 systemd[1]: var-lib-containers-storage-overlay-50c41b8aefd7ce216cdef64bfd185de00e3f6154e005a52a573b016113ee833f-merged.mount: Deactivated successfully.
Jan 31 02:08:10 np0005603541 podman[246476]: 2026-01-31 07:08:10.854830817 +0000 UTC m=+0.227662202 container remove 472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:08:10 np0005603541 systemd[1]: libpod-conmon-472ca664c4a39fc107fc952af6e449cd72ce0915a1e21376f819fc84c3d912d2.scope: Deactivated successfully.
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:08:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:11 np0005603541 podman[246516]: 2026-01-31 07:08:11.018425268 +0000 UTC m=+0.045838091 container create edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:08:11 np0005603541 systemd[1]: Started libpod-conmon-edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210.scope.
Jan 31 02:08:11 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:11 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:11 np0005603541 podman[246516]: 2026-01-31 07:08:11.000188259 +0000 UTC m=+0.027601092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:11 np0005603541 podman[246516]: 2026-01-31 07:08:11.109483682 +0000 UTC m=+0.136896555 container init edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:08:11 np0005603541 podman[246516]: 2026-01-31 07:08:11.119345545 +0000 UTC m=+0.146758368 container start edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:08:11 np0005603541 podman[246516]: 2026-01-31 07:08:11.12358768 +0000 UTC m=+0.151000533 container attach edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:08:11 np0005603541 modest_northcutt[246533]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:08:11 np0005603541 modest_northcutt[246533]: --> relative data size: 1.0
Jan 31 02:08:11 np0005603541 modest_northcutt[246533]: --> All data devices are unavailable
Jan 31 02:08:12 np0005603541 systemd[1]: libpod-edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210.scope: Deactivated successfully.
Jan 31 02:08:12 np0005603541 podman[246516]: 2026-01-31 07:08:12.002933941 +0000 UTC m=+1.030346764 container died edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:08:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4eb138987c7638dd20fc56d9843a7a840ad864e60f211f8cc21828c088a3381d-merged.mount: Deactivated successfully.
Jan 31 02:08:12 np0005603541 podman[246516]: 2026-01-31 07:08:12.08604906 +0000 UTC m=+1.113461883 container remove edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:08:12 np0005603541 systemd[1]: libpod-conmon-edac84a1af574712268cda2fa5cf64bdcc5b6b48bc7b816a2ed3822bd77e8210.scope: Deactivated successfully.
Jan 31 02:08:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1034 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.664813704 +0000 UTC m=+0.051756687 container create 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:08:12 np0005603541 systemd[1]: Started libpod-conmon-499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5.scope.
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.638566117 +0000 UTC m=+0.025509150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:12 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.759733953 +0000 UTC m=+0.146676926 container init 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.771168985 +0000 UTC m=+0.158111978 container start 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:08:12 np0005603541 systemd[1]: libpod-499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5.scope: Deactivated successfully.
Jan 31 02:08:12 np0005603541 boring_agnesi[246719]: 167 167
Jan 31 02:08:12 np0005603541 conmon[246719]: conmon 499f921019adc9d226f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5.scope/container/memory.events
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.780432894 +0000 UTC m=+0.167375937 container attach 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.782145936 +0000 UTC m=+0.169088899 container died 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:08:12 np0005603541 systemd[1]: var-lib-containers-storage-overlay-c868c25b24cc4a63622bfa155310cac85c70d9470fc98808fdf0f211e33ddadb-merged.mount: Deactivated successfully.
Jan 31 02:08:12 np0005603541 podman[246703]: 2026-01-31 07:08:12.838905454 +0000 UTC m=+0.225848417 container remove 499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:08:12 np0005603541 systemd[1]: libpod-conmon-499f921019adc9d226f94b854f3139db074a30170848c8ab65b783110dc5abf5.scope: Deactivated successfully.
Jan 31 02:08:12 np0005603541 podman[246743]: 2026-01-31 07:08:12.992341206 +0000 UTC m=+0.044231592 container create 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:08:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:13 np0005603541 systemd[1]: Started libpod-conmon-04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574.scope.
Jan 31 02:08:13 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5367c695fa7157018e15877415e2f85bfd7ca0bd1dde94cb94837d79827640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5367c695fa7157018e15877415e2f85bfd7ca0bd1dde94cb94837d79827640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5367c695fa7157018e15877415e2f85bfd7ca0bd1dde94cb94837d79827640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:13 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5367c695fa7157018e15877415e2f85bfd7ca0bd1dde94cb94837d79827640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:12.973391318 +0000 UTC m=+0.025281724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:13.07203838 +0000 UTC m=+0.123928766 container init 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:13.078056548 +0000 UTC m=+0.129946934 container start 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:13.089043019 +0000 UTC m=+0.140933425 container attach 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:08:13 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1034 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]: {
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:    "0": [
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:        {
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "devices": [
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "/dev/loop3"
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            ],
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "lv_name": "ceph_lv0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "lv_size": "7511998464",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "name": "ceph_lv0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "tags": {
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.cluster_name": "ceph",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.crush_device_class": "",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.encrypted": "0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.osd_id": "0",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.type": "block",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:                "ceph.vdo": "0"
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            },
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "type": "block",
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:            "vg_name": "ceph_vg0"
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:        }
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]:    ]
Jan 31 02:08:13 np0005603541 naughty_wilbur[246759]: }
Jan 31 02:08:13 np0005603541 systemd[1]: libpod-04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574.scope: Deactivated successfully.
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:13.903965353 +0000 UTC m=+0.955855729 container died 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:08:13 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ef5367c695fa7157018e15877415e2f85bfd7ca0bd1dde94cb94837d79827640-merged.mount: Deactivated successfully.
Jan 31 02:08:13 np0005603541 podman[246743]: 2026-01-31 07:08:13.977723791 +0000 UTC m=+1.029614177 container remove 04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilbur, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:08:13 np0005603541 systemd[1]: libpod-conmon-04cf53afcd7fea8050a85e479340e8aa14385d4cb159135c49c2e066f446a574.scope: Deactivated successfully.
Jan 31 02:08:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.533097787 +0000 UTC m=+0.023418877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.650971263 +0000 UTC m=+0.141292373 container create e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:08:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:14 np0005603541 systemd[1]: Started libpod-conmon-e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27.scope.
Jan 31 02:08:14 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.809403337 +0000 UTC m=+0.299724437 container init e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.81682976 +0000 UTC m=+0.307150830 container start e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:08:14 np0005603541 objective_jackson[246941]: 167 167
Jan 31 02:08:14 np0005603541 systemd[1]: libpod-e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27.scope: Deactivated successfully.
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.834456905 +0000 UTC m=+0.324777975 container attach e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.835336626 +0000 UTC m=+0.325657696 container died e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:08:14 np0005603541 systemd[1]: var-lib-containers-storage-overlay-50d4e10a66b8a96c443cab2bf7d675c03a7d94a6924594120fbdbb5fe35987d5-merged.mount: Deactivated successfully.
Jan 31 02:08:14 np0005603541 podman[246925]: 2026-01-31 07:08:14.955401376 +0000 UTC m=+0.445722446 container remove e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:08:14 np0005603541 systemd[1]: libpod-conmon-e035569310c70411545326696bec82384b392259e336cbe3f9cda154dcedda27.scope: Deactivated successfully.
Jan 31 02:08:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:15 np0005603541 podman[246965]: 2026-01-31 07:08:15.169885791 +0000 UTC m=+0.098703753 container create 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:08:15 np0005603541 podman[246965]: 2026-01-31 07:08:15.101138457 +0000 UTC m=+0.029956379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:15 np0005603541 systemd[1]: Started libpod-conmon-71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb.scope.
Jan 31 02:08:15 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:08:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375ec76e260c4e762e89ea490be33cf4727da35b4cece45e016b4b853cb61ff1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375ec76e260c4e762e89ea490be33cf4727da35b4cece45e016b4b853cb61ff1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375ec76e260c4e762e89ea490be33cf4727da35b4cece45e016b4b853cb61ff1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:15 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375ec76e260c4e762e89ea490be33cf4727da35b4cece45e016b4b853cb61ff1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:15 np0005603541 podman[246965]: 2026-01-31 07:08:15.290266848 +0000 UTC m=+0.219084800 container init 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:08:15 np0005603541 podman[246965]: 2026-01-31 07:08:15.297916537 +0000 UTC m=+0.226734459 container start 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:08:15 np0005603541 podman[246965]: 2026-01-31 07:08:15.302527681 +0000 UTC m=+0.231345633 container attach 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:08:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]: {
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:        "osd_id": 0,
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:        "type": "bluestore"
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]:    }
Jan 31 02:08:16 np0005603541 gifted_faraday[246982]: }
Jan 31 02:08:16 np0005603541 systemd[1]: libpod-71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb.scope: Deactivated successfully.
Jan 31 02:08:16 np0005603541 podman[246965]: 2026-01-31 07:08:16.17775115 +0000 UTC m=+1.106569072 container died 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:08:16 np0005603541 systemd[1]: var-lib-containers-storage-overlay-375ec76e260c4e762e89ea490be33cf4727da35b4cece45e016b4b853cb61ff1-merged.mount: Deactivated successfully.
Jan 31 02:08:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:16 np0005603541 podman[246965]: 2026-01-31 07:08:16.60908788 +0000 UTC m=+1.537905802 container remove 71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:08:16 np0005603541 systemd[1]: libpod-conmon-71307b5478ef263756ad0b47941516ee16b5d1e2149675ab26f5cdcde41399cb.scope: Deactivated successfully.
Jan 31 02:08:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:08:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:16 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:08:16 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:16 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 7dcc9963-a7bd-4cd5-9431-a21e91abd1fd does not exist
Jan 31 02:08:16 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 9d9d0ba3-e180-4ccd-95e9-8591e7b8ab73 does not exist
Jan 31 02:08:16 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev bb2653f6-f106-4433-bc82-40228e4d004f does not exist
Jan 31 02:08:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:08:17 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:18.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:19.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:08:20.137 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:08:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:08:20.138 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:08:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:08:20.138 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:08:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1044 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:22 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1044 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:25.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:27 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:29.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:31.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:32.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:32 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:33.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:34 np0005603541 podman[247127]: 2026-01-31 07:08:34.019924078 +0000 UTC m=+0.056539704 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:08:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:34.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.632 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.632 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.633 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.633 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.737 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.737 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.738 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.738 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.739 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.739 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.740 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.740 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.741 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:08:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.799 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.800 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.800 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.800 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:08:34 np0005603541 nova_compute[245601]: 2026-01-31 07:08:34.801 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:08:35 np0005603541 podman[247168]: 2026-01-31 07:08:35.028474024 +0000 UTC m=+0.067377011 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 02:08:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:35 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:08:35 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1747668994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.316 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.450 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.451 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5201MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.451 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.451 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.582 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.583 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:08:35 np0005603541 nova_compute[245601]: 2026-01-31 07:08:35.601 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:08:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:08:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323485511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:08:36 np0005603541 nova_compute[245601]: 2026-01-31 07:08:36.019 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:08:36 np0005603541 nova_compute[245601]: 2026-01-31 07:08:36.024 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:08:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:36.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:36 np0005603541 nova_compute[245601]: 2026-01-31 07:08:36.054 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:08:36 np0005603541 nova_compute[245601]: 2026-01-31 07:08:36.056 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:08:36 np0005603541 nova_compute[245601]: 2026-01-31 07:08:36.056 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:08:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:37.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1059 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:37 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1059 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:38.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:41.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:42.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1064 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:42 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1064 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:45.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:46.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:47.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1069 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:47 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1069 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:08:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:49.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:08:49
Jan 31 02:08:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:08:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:08:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'backups']
Jan 31 02:08:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:08:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:50.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:51.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1074 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:52 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1074 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:54.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:08:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:55.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:56.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:57.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1079 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:57 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1079 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:08:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:58.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:08:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:08:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:08:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:59.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:00.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:01.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:02.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1084 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:02 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1084 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:03.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:04.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:05 np0005603541 podman[247333]: 2026-01-31 07:09:05.059384605 +0000 UTC m=+0.092439089 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 02:09:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:05.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:05 np0005603541 podman[247352]: 2026-01-31 07:09:05.155251447 +0000 UTC m=+0.091767562 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 02:09:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1089 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:09:07 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8544 writes, 33K keys, 8544 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8544 writes, 1756 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 662 writes, 1039 keys, 662 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s#012Interval WAL: 662 writes, 314 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Jan 31 02:09:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:07 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1089 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:10.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:09:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:12.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1094 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:13 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1094 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:14.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:17.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1099 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:09:17 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1099 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 5bb384d7-98f4-4a99-816a-6f2fbcdad3de does not exist
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d3a5567e-d181-4795-921e-b266d07ca4ef does not exist
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 5581ab67-dc6a-430d-b419-c3a426be48b1 does not exist
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:09:18 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.803662839 +0000 UTC m=+0.041987966 container create 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:09:18 np0005603541 systemd[1]: Started libpod-conmon-8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6.scope.
Jan 31 02:09:18 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.879447387 +0000 UTC m=+0.117772574 container init 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.785396269 +0000 UTC m=+0.023721376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.890438988 +0000 UTC m=+0.128764075 container start 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:09:18 np0005603541 focused_cartwright[247844]: 167 167
Jan 31 02:09:18 np0005603541 systemd[1]: libpod-8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6.scope: Deactivated successfully.
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.896423326 +0000 UTC m=+0.134748463 container attach 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.897398319 +0000 UTC m=+0.135723416 container died 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:09:18 np0005603541 systemd[1]: var-lib-containers-storage-overlay-3ca6a3782b9c86cbc71e85b932693831fb11896d326ba15d000fda0395d908b1-merged.mount: Deactivated successfully.
Jan 31 02:09:18 np0005603541 podman[247829]: 2026-01-31 07:09:18.949463082 +0000 UTC m=+0.187788179 container remove 8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cartwright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:09:18 np0005603541 systemd[1]: libpod-conmon-8ec9e4d17ec6a077dfd5292d22909634c3a22f0918f3759ed232482eb23351b6.scope: Deactivated successfully.
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.088560001 +0000 UTC m=+0.041980856 container create 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:09:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:09:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:19 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:09:19 np0005603541 systemd[1]: Started libpod-conmon-8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06.scope.
Jan 31 02:09:19 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:19 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.162186865 +0000 UTC m=+0.115607750 container init 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.070342452 +0000 UTC m=+0.023763307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.168276336 +0000 UTC m=+0.121697191 container start 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.173491463 +0000 UTC m=+0.126912348 container attach 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:09:19 np0005603541 happy_boyd[247885]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:09:19 np0005603541 happy_boyd[247885]: --> relative data size: 1.0
Jan 31 02:09:19 np0005603541 happy_boyd[247885]: --> All data devices are unavailable
Jan 31 02:09:19 np0005603541 systemd[1]: libpod-8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06.scope: Deactivated successfully.
Jan 31 02:09:19 np0005603541 podman[247869]: 2026-01-31 07:09:19.939144614 +0000 UTC m=+0.892565509 container died 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:09:19 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1d4a8e4a53a29ad1551b6e03f50589bc85b056f2c51059b4257b2dc838086784-merged.mount: Deactivated successfully.
Jan 31 02:09:20 np0005603541 podman[247869]: 2026-01-31 07:09:19.999979443 +0000 UTC m=+0.953400298 container remove 8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:09:20 np0005603541 systemd[1]: libpod-conmon-8dce1d27dc1df8a341eacc41c9e84ab6f8800f634852c73aef668c787afa3a06.scope: Deactivated successfully.
Jan 31 02:09:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:20.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:20.138 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:09:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:20.140 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:09:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:20.140 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:09:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.545452246 +0000 UTC m=+0.045226355 container create 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:20 np0005603541 systemd[1]: Started libpod-conmon-5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1.scope.
Jan 31 02:09:20 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.529509804 +0000 UTC m=+0.029283903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.628130784 +0000 UTC m=+0.127904903 container init 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.634892571 +0000 UTC m=+0.134666650 container start 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:09:20 np0005603541 brave_raman[248067]: 167 167
Jan 31 02:09:20 np0005603541 systemd[1]: libpod-5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1.scope: Deactivated successfully.
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.64539581 +0000 UTC m=+0.145169989 container attach 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.645900182 +0000 UTC m=+0.145674301 container died 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:09:20 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7d3801fc49e1eac873f794a9ae5bd9a71a526b12696dcd7ce1d96097ba5df7a6-merged.mount: Deactivated successfully.
Jan 31 02:09:20 np0005603541 podman[248051]: 2026-01-31 07:09:20.71152918 +0000 UTC m=+0.211303299 container remove 5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:09:20 np0005603541 systemd[1]: libpod-conmon-5103a609a7b9c5ae822cd9ed3b74eae9c0c212f0cc22d6002fb25131e4b346d1.scope: Deactivated successfully.
Jan 31 02:09:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:20 np0005603541 podman[248091]: 2026-01-31 07:09:20.849775577 +0000 UTC m=+0.044733383 container create 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:20 np0005603541 systemd[1]: Started libpod-conmon-989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0.scope.
Jan 31 02:09:20 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88595c79811a2b2f1a233da372a09a9e4dec95d91f8bd1af5109b5ae5ae333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88595c79811a2b2f1a233da372a09a9e4dec95d91f8bd1af5109b5ae5ae333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88595c79811a2b2f1a233da372a09a9e4dec95d91f8bd1af5109b5ae5ae333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:20 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88595c79811a2b2f1a233da372a09a9e4dec95d91f8bd1af5109b5ae5ae333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:20 np0005603541 podman[248091]: 2026-01-31 07:09:20.831972348 +0000 UTC m=+0.026930164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:20 np0005603541 podman[248091]: 2026-01-31 07:09:20.952540289 +0000 UTC m=+0.147498115 container init 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:20 np0005603541 podman[248091]: 2026-01-31 07:09:20.961135761 +0000 UTC m=+0.156093557 container start 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:09:20 np0005603541 podman[248091]: 2026-01-31 07:09:20.974198803 +0000 UTC m=+0.169156639 container attach 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:09:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]: {
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:    "0": [
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:        {
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "devices": [
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "/dev/loop3"
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            ],
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "lv_name": "ceph_lv0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "lv_size": "7511998464",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "name": "ceph_lv0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "tags": {
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.cluster_name": "ceph",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.crush_device_class": "",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.encrypted": "0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.osd_id": "0",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.type": "block",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:                "ceph.vdo": "0"
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            },
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "type": "block",
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:            "vg_name": "ceph_vg0"
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:        }
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]:    ]
Jan 31 02:09:21 np0005603541 cranky_lovelace[248108]: }
Jan 31 02:09:21 np0005603541 systemd[1]: libpod-989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0.scope: Deactivated successfully.
Jan 31 02:09:21 np0005603541 podman[248091]: 2026-01-31 07:09:21.656804666 +0000 UTC m=+0.851762452 container died 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:09:21 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8d88595c79811a2b2f1a233da372a09a9e4dec95d91f8bd1af5109b5ae5ae333-merged.mount: Deactivated successfully.
Jan 31 02:09:21 np0005603541 podman[248091]: 2026-01-31 07:09:21.719231005 +0000 UTC m=+0.914188801 container remove 989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:09:21 np0005603541 systemd[1]: libpod-conmon-989358097e05ec3e01555639763a3471907f8c7edc113d2975c723e1d64735f0.scope: Deactivated successfully.
Jan 31 02:09:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:22.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1104 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.369556092 +0000 UTC m=+0.047653695 container create d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:09:22 np0005603541 systemd[1]: Started libpod-conmon-d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed.scope.
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.348126065 +0000 UTC m=+0.026223678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.467930237 +0000 UTC m=+0.146027820 container init d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.476535109 +0000 UTC m=+0.154632722 container start d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:09:22 np0005603541 kind_poitras[248290]: 167 167
Jan 31 02:09:22 np0005603541 systemd[1]: libpod-d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed.scope: Deactivated successfully.
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.486798352 +0000 UTC m=+0.164896025 container attach d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.487432007 +0000 UTC m=+0.165529620 container died d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:09:22 np0005603541 systemd[1]: var-lib-containers-storage-overlay-da309fd18da124dc8de5497bb44608ed1866181c8f8025cfa5e0dd288ca516b7-merged.mount: Deactivated successfully.
Jan 31 02:09:22 np0005603541 podman[248273]: 2026-01-31 07:09:22.545115309 +0000 UTC m=+0.223212922 container remove d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:09:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:22 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1104 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:22 np0005603541 systemd[1]: libpod-conmon-d6de5411544de4b83486d327faa64b5287740d161d42910b4f4c8f3e0f1418ed.scope: Deactivated successfully.
Jan 31 02:09:22 np0005603541 podman[248316]: 2026-01-31 07:09:22.740358071 +0000 UTC m=+0.069342360 container create 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:09:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:22 np0005603541 systemd[1]: Started libpod-conmon-94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a.scope.
Jan 31 02:09:22 np0005603541 podman[248316]: 2026-01-31 07:09:22.712361031 +0000 UTC m=+0.041345390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:09:22 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:09:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975b5ac0ef6f59bedca09122804fbdf5a56a01fee268c4210e735e11bf9023b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975b5ac0ef6f59bedca09122804fbdf5a56a01fee268c4210e735e11bf9023b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975b5ac0ef6f59bedca09122804fbdf5a56a01fee268c4210e735e11bf9023b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:22 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975b5ac0ef6f59bedca09122804fbdf5a56a01fee268c4210e735e11bf9023b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:09:22 np0005603541 podman[248316]: 2026-01-31 07:09:22.836629824 +0000 UTC m=+0.165614123 container init 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:09:22 np0005603541 podman[248316]: 2026-01-31 07:09:22.846554359 +0000 UTC m=+0.175538668 container start 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:22 np0005603541 podman[248316]: 2026-01-31 07:09:22.85313835 +0000 UTC m=+0.182122629 container attach 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:09:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:23 np0005603541 laughing_elion[248334]: {
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:        "osd_id": 0,
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:        "type": "bluestore"
Jan 31 02:09:23 np0005603541 laughing_elion[248334]:    }
Jan 31 02:09:23 np0005603541 laughing_elion[248334]: }
Jan 31 02:09:23 np0005603541 systemd[1]: libpod-94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a.scope: Deactivated successfully.
Jan 31 02:09:23 np0005603541 podman[248316]: 2026-01-31 07:09:23.639931171 +0000 UTC m=+0.968915470 container died 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:09:23 np0005603541 systemd[1]: var-lib-containers-storage-overlay-975b5ac0ef6f59bedca09122804fbdf5a56a01fee268c4210e735e11bf9023b0-merged.mount: Deactivated successfully.
Jan 31 02:09:23 np0005603541 podman[248316]: 2026-01-31 07:09:23.708721888 +0000 UTC m=+1.037706147 container remove 94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:09:23 np0005603541 systemd[1]: libpod-conmon-94e9febcd8c0da6574a4c08163c39722dde8cf36b779fb35ad52e4cdf8fa027a.scope: Deactivated successfully.
Jan 31 02:09:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:09:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:23 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:09:23 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:23 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d03ff481-861c-4831-a621-49ccd495a232 does not exist
Jan 31 02:09:23 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 91b37df7-1b2a-4f79-8f83-fdeeadf4fc42 does not exist
Jan 31 02:09:23 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 8969bb20-8617-43fd-85be-074730f32933 does not exist
Jan 31 02:09:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:24 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:24 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:09:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1109 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:27 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1109 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:28.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:29.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:30.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:32.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:32 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:33.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:34.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.055 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.056 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:09:36 np0005603541 podman[248427]: 2026-01-31 07:09:36.076699693 +0000 UTC m=+0.101638607 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.083 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.084 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.084 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:09:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:36.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:36 np0005603541 podman[248426]: 2026-01-31 07:09:36.092579274 +0000 UTC m=+0.117580229 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.099 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.099 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.099 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.100 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.100 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.100 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.654 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.655 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.655 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.655 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:09:36 np0005603541 nova_compute[245601]: 2026-01-31 07:09:36.656 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:09:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918497747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.108 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:09:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.263 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.264 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5215MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.264 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.264 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.340 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.340 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.359 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794741233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.796 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.802 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.820 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.822 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:09:37 np0005603541 nova_compute[245601]: 2026-01-31 07:09:37.822 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:38.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:39 np0005603541 ceph-mgr[74648]: [devicehealth INFO root] Check health
Jan 31 02:09:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:42.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:42 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:44.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:45.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:48 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:48.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:09:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:09:49
Jan 31 02:09:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:09:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:09:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', '.mgr', 'images', 'default.rgw.meta', '.rgw.root']
Jan 31 02:09:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:09:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:49.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:50.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:52.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:52 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:54.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:09:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:55 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:55.453 158874 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c2:21:78', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:5e:fd:5b:c6:c6'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:09:55 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:55.455 158874 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:09:55 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:09:55.457 158874 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e3f3772b-46c1-4a7f-ae43-0efc80b30197, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:09:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:56.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:57 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:09:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:09:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:09:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:09:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 02:10:00 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 02:10:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:00.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:00 np0005603541 ceph-mon[74355]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 02:10:00 np0005603541 ceph-mon[74355]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 02:10:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:02.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:02 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:04.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:06.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:07 np0005603541 podman[248629]: 2026-01-31 07:10:07.039925552 +0000 UTC m=+0.077443005 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:10:07 np0005603541 podman[248630]: 2026-01-31 07:10:07.047300953 +0000 UTC m=+0.080975242 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 02:10:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:07.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:07 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:10:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:11.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:11 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:12 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:13.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:16.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:17.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:17 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:18.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:19.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:10:20.139 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:10:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:10:20.140 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:10:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:10:20.140 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:10:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:20.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:21.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:22.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:22 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:23.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:24.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:10:24 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:10:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:25 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev eed60eda-c062-473e-beb5-5a4472b9c947 does not exist
Jan 31 02:10:25 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 14c4ccdc-1eee-4f2b-92bb-6cdc42bb4878 does not exist
Jan 31 02:10:25 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 846b21da-c948-4573-a1b2-b183eaa3c5bc does not exist
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:10:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:25 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.643453875 +0000 UTC m=+0.036880608 container create 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:10:25 np0005603541 systemd[1]: Started libpod-conmon-6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef.scope.
Jan 31 02:10:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.626316468 +0000 UTC m=+0.019743251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.742274331 +0000 UTC m=+0.135701084 container init 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.749994039 +0000 UTC m=+0.143420812 container start 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.754030597 +0000 UTC m=+0.147457350 container attach 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:10:25 np0005603541 inspiring_matsumoto[249024]: 167 167
Jan 31 02:10:25 np0005603541 systemd[1]: libpod-6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef.scope: Deactivated successfully.
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.756524468 +0000 UTC m=+0.149951241 container died 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:10:25 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1e42ad9cba90a37e30ead92aa501cb823c1c2b307e4fe615aad0319fc76ec7de-merged.mount: Deactivated successfully.
Jan 31 02:10:25 np0005603541 podman[249007]: 2026-01-31 07:10:25.805531361 +0000 UTC m=+0.198958094 container remove 6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:10:25 np0005603541 systemd[1]: libpod-conmon-6f06d4cc41e9d74461161e52a5dc4a4927b08cf7281a3d7a3c5cb6ff0eab3cef.scope: Deactivated successfully.
Jan 31 02:10:25 np0005603541 podman[249048]: 2026-01-31 07:10:25.954280202 +0000 UTC m=+0.036605062 container create a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:10:25 np0005603541 systemd[1]: Started libpod-conmon-a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52.scope.
Jan 31 02:10:25 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:26 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:26.016436005 +0000 UTC m=+0.098760935 container init a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:26.024817749 +0000 UTC m=+0.107142609 container start a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:26.02857058 +0000 UTC m=+0.110895530 container attach a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:25.937216296 +0000 UTC m=+0.019541186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:26.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:26 np0005603541 confident_ishizaka[249064]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:10:26 np0005603541 confident_ishizaka[249064]: --> relative data size: 1.0
Jan 31 02:10:26 np0005603541 confident_ishizaka[249064]: --> All data devices are unavailable
Jan 31 02:10:26 np0005603541 systemd[1]: libpod-a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52.scope: Deactivated successfully.
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:26.834871717 +0000 UTC m=+0.917196637 container died a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:10:26 np0005603541 systemd[1]: var-lib-containers-storage-overlay-04be0f18ce77b578bc98de14de377ba7b738c6ee5464521d2b283cda169fb98d-merged.mount: Deactivated successfully.
Jan 31 02:10:26 np0005603541 podman[249048]: 2026-01-31 07:10:26.884372411 +0000 UTC m=+0.966697271 container remove a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:10:26 np0005603541 systemd[1]: libpod-conmon-a2a53cc174ce54dff828ad05655c802bfeb677a1ef6111154c02cec58ac73a52.scope: Deactivated successfully.
Jan 31 02:10:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1169 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.444972518 +0000 UTC m=+0.047633030 container create 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:10:27 np0005603541 systemd[1]: Started libpod-conmon-9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2.scope.
Jan 31 02:10:27 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.420752539 +0000 UTC m=+0.023413121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.518126449 +0000 UTC m=+0.120786951 container init 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.525695012 +0000 UTC m=+0.128355514 container start 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.52927817 +0000 UTC m=+0.131938702 container attach 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:10:27 np0005603541 vigilant_dewdney[249251]: 167 167
Jan 31 02:10:27 np0005603541 systemd[1]: libpod-9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2.scope: Deactivated successfully.
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.531103154 +0000 UTC m=+0.133763666 container died 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:10:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:27 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1169 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:27 np0005603541 systemd[1]: var-lib-containers-storage-overlay-770a1c7e0b3088f950af3445ac1e2282150b7598aeff8842811ef62ee7ace4c3-merged.mount: Deactivated successfully.
Jan 31 02:10:27 np0005603541 podman[249235]: 2026-01-31 07:10:27.573680351 +0000 UTC m=+0.176340873 container remove 9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:10:27 np0005603541 systemd[1]: libpod-conmon-9ce10ea21f101ccbc967947fdf4280efaa095551be3db9b0c6bef7885e13d6b2.scope: Deactivated successfully.
Jan 31 02:10:27 np0005603541 podman[249276]: 2026-01-31 07:10:27.71248321 +0000 UTC m=+0.048662686 container create 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:10:27 np0005603541 systemd[1]: Started libpod-conmon-5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567.scope.
Jan 31 02:10:27 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1a4122cfcddbc5e352a571b632853efcc5c3a0ed9d97751a715aaead90a8d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1a4122cfcddbc5e352a571b632853efcc5c3a0ed9d97751a715aaead90a8d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1a4122cfcddbc5e352a571b632853efcc5c3a0ed9d97751a715aaead90a8d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:27 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1a4122cfcddbc5e352a571b632853efcc5c3a0ed9d97751a715aaead90a8d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:27 np0005603541 podman[249276]: 2026-01-31 07:10:27.693642211 +0000 UTC m=+0.029821707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:27 np0005603541 podman[249276]: 2026-01-31 07:10:27.796431043 +0000 UTC m=+0.132610539 container init 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:10:27 np0005603541 podman[249276]: 2026-01-31 07:10:27.803355822 +0000 UTC m=+0.139535338 container start 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:10:27 np0005603541 podman[249276]: 2026-01-31 07:10:27.811028868 +0000 UTC m=+0.147208374 container attach 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:10:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:28.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]: {
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:    "0": [
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:        {
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "devices": [
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "/dev/loop3"
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            ],
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "lv_name": "ceph_lv0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "lv_size": "7511998464",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "name": "ceph_lv0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "tags": {
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.cluster_name": "ceph",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.crush_device_class": "",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.encrypted": "0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.osd_id": "0",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.type": "block",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:                "ceph.vdo": "0"
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            },
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "type": "block",
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:            "vg_name": "ceph_vg0"
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:        }
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]:    ]
Jan 31 02:10:28 np0005603541 goofy_banzai[249293]: }
Jan 31 02:10:28 np0005603541 systemd[1]: libpod-5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567.scope: Deactivated successfully.
Jan 31 02:10:28 np0005603541 podman[249276]: 2026-01-31 07:10:28.610227392 +0000 UTC m=+0.946406938 container died 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:10:28 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ca1a4122cfcddbc5e352a571b632853efcc5c3a0ed9d97751a715aaead90a8d9-merged.mount: Deactivated successfully.
Jan 31 02:10:28 np0005603541 podman[249276]: 2026-01-31 07:10:28.687722758 +0000 UTC m=+1.023902234 container remove 5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:10:28 np0005603541 systemd[1]: libpod-conmon-5e30a47984baf3a136a6d65b09169232424424bf55bae81b99710e8c165d8567.scope: Deactivated successfully.
Jan 31 02:10:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:29.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.254239529 +0000 UTC m=+0.046465472 container create f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:10:29 np0005603541 systemd[1]: Started libpod-conmon-f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2.scope.
Jan 31 02:10:29 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.322542452 +0000 UTC m=+0.114768405 container init f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.326502968 +0000 UTC m=+0.118728901 container start f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.32944881 +0000 UTC m=+0.121674743 container attach f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.232211302 +0000 UTC m=+0.024437255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:29 np0005603541 clever_moser[249474]: 167 167
Jan 31 02:10:29 np0005603541 systemd[1]: libpod-f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2.scope: Deactivated successfully.
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.332444883 +0000 UTC m=+0.124670856 container died f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:10:29 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9eded67719d46264e41827d9bc8da209bf7c184eba7815812c3bc2d77861fd24-merged.mount: Deactivated successfully.
Jan 31 02:10:29 np0005603541 podman[249457]: 2026-01-31 07:10:29.372156639 +0000 UTC m=+0.164382572 container remove f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:10:29 np0005603541 systemd[1]: libpod-conmon-f330f9460baa7c3b5a6308fc0e01702ac86322c8bb6820bb6e69b69bb161a2f2.scope: Deactivated successfully.
Jan 31 02:10:29 np0005603541 podman[249499]: 2026-01-31 07:10:29.505814833 +0000 UTC m=+0.040311193 container create d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:10:29 np0005603541 systemd[1]: Started libpod-conmon-d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b.scope.
Jan 31 02:10:29 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:10:29 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631907dacba38e3e54854dbca4ad1d8d17c91665319224536e7dd43c6383dc42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:29 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631907dacba38e3e54854dbca4ad1d8d17c91665319224536e7dd43c6383dc42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:29 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631907dacba38e3e54854dbca4ad1d8d17c91665319224536e7dd43c6383dc42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:29 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631907dacba38e3e54854dbca4ad1d8d17c91665319224536e7dd43c6383dc42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:10:29 np0005603541 podman[249499]: 2026-01-31 07:10:29.488503341 +0000 UTC m=+0.022999701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:10:29 np0005603541 podman[249499]: 2026-01-31 07:10:29.599942824 +0000 UTC m=+0.134439234 container init d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:10:29 np0005603541 podman[249499]: 2026-01-31 07:10:29.614050647 +0000 UTC m=+0.148547007 container start d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:10:29 np0005603541 podman[249499]: 2026-01-31 07:10:29.61706433 +0000 UTC m=+0.151560690 container attach d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 02:10:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:30.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]: {
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:        "osd_id": 0,
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:        "type": "bluestore"
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]:    }
Jan 31 02:10:30 np0005603541 gallant_blackwell[249516]: }
Jan 31 02:10:30 np0005603541 systemd[1]: libpod-d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b.scope: Deactivated successfully.
Jan 31 02:10:30 np0005603541 podman[249499]: 2026-01-31 07:10:30.38769022 +0000 UTC m=+0.922186610 container died d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:10:30 np0005603541 systemd[1]: var-lib-containers-storage-overlay-631907dacba38e3e54854dbca4ad1d8d17c91665319224536e7dd43c6383dc42-merged.mount: Deactivated successfully.
Jan 31 02:10:30 np0005603541 podman[249499]: 2026-01-31 07:10:30.468961468 +0000 UTC m=+1.003457828 container remove d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:10:30 np0005603541 systemd[1]: libpod-conmon-d422e951d39fb2e6f86599052fa2b0604656c4d0a3a3d944e35494ca98e7006b.scope: Deactivated successfully.
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 27516eef-349b-4131-9811-789eb35d56bd does not exist
Jan 31 02:10:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 83681422-c2df-48ca-9897-2c8c74fa9f14 does not exist
Jan 31 02:10:30 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d3ac5b57-9bce-41db-b78f-357cf1fbc2c9 does not exist
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:30 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:10:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:32.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:32 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:32 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:33.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:35 np0005603541 nova_compute[245601]: 2026-01-31 07:10:35.822 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:35 np0005603541 nova_compute[245601]: 2026-01-31 07:10:35.823 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:35 np0005603541 nova_compute[245601]: 2026-01-31 07:10:35.824 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:35 np0005603541 nova_compute[245601]: 2026-01-31 07:10:35.824 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:10:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:10:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.859 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.860 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.860 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.860 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.889202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436889323, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2226, "num_deletes": 251, "total_data_size": 3210034, "memory_usage": 3270400, "flush_reason": "Manual Compaction"}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.893 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.894 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.894 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.895 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:10:36 np0005603541 nova_compute[245601]: 2026-01-31 07:10:36.895 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436905531, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 3114768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20478, "largest_seqno": 22703, "table_properties": {"data_size": 3105633, "index_size": 5245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23625, "raw_average_key_size": 21, "raw_value_size": 3085281, "raw_average_value_size": 2792, "num_data_blocks": 228, "num_entries": 1105, "num_filter_entries": 1105, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843278, "oldest_key_time": 1769843278, "file_creation_time": 1769843436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16424 microseconds, and 7617 cpu microseconds.
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.905642) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 3114768 bytes OK
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.905673) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.906996) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.907019) EVENT_LOG_v1 {"time_micros": 1769843436907011, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.907045) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3200609, prev total WAL file size 3200609, number of live WAL files 2.
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.908209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(3041KB)], [47(7059KB)]
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436908281, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10343597, "oldest_snapshot_seqno": -1}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5852 keys, 8664672 bytes, temperature: kUnknown
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436946459, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8664672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8627940, "index_size": 21006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 152108, "raw_average_key_size": 25, "raw_value_size": 8523587, "raw_average_value_size": 1456, "num_data_blocks": 838, "num_entries": 5852, "num_filter_entries": 5852, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.946923) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8664672 bytes
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.950851) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 270.3 rd, 226.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.9 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6371, records dropped: 519 output_compression: NoCompression
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.950886) EVENT_LOG_v1 {"time_micros": 1769843436950869, "job": 24, "event": "compaction_finished", "compaction_time_micros": 38274, "compaction_time_cpu_micros": 16873, "output_level": 6, "num_output_files": 1, "total_output_size": 8664672, "num_input_records": 6371, "num_output_records": 5852, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436951518, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436952626, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.908100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.952707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.952715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.952718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.952721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:36 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:10:36.952724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1179 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623330012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.318 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.459 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.461 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.461 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.461 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.543 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.543 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:10:37 np0005603541 nova_compute[245601]: 2026-01-31 07:10:37.564 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1179 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:37 np0005603541 podman[249670]: 2026-01-31 07:10:37.906621674 +0000 UTC m=+0.066349947 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 02:10:37 np0005603541 podman[249671]: 2026-01-31 07:10:37.914800713 +0000 UTC m=+0.074309451 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:10:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713155213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:10:38 np0005603541 nova_compute[245601]: 2026-01-31 07:10:38.013 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:10:38 np0005603541 nova_compute[245601]: 2026-01-31 07:10:38.016 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:10:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:38 np0005603541 nova_compute[245601]: 2026-01-31 07:10:38.260 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:10:38 np0005603541 nova_compute[245601]: 2026-01-31 07:10:38.261 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:10:38 np0005603541 nova_compute[245601]: 2026-01-31 07:10:38.261 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:10:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:39.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:40 np0005603541 nova_compute[245601]: 2026-01-31 07:10:40.027 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:40 np0005603541 nova_compute[245601]: 2026-01-31 07:10:40.027 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:10:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:41.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:43 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:43.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:45.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:46.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1189 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:47.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:47 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1189 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:48.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:10:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:10:49
Jan 31 02:10:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:10:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:10:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root']
Jan 31 02:10:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:10:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:49 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:50.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:52.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1194 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:52 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1194 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:10:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:56.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1199 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:10:58 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1199 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:10:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:10:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:10:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:00 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:01 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1204 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:02 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:02 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1204 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:03 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:04 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:05.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:05 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:06 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1209 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:07.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:07 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:07 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1209 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:08 np0005603541 podman[249809]: 2026-01-31 07:11:08.017684042 +0000 UTC m=+0.052189081 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:11:08 np0005603541 podman[249808]: 2026-01-31 07:11:08.077269833 +0000 UTC m=+0.112906880 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:11:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:08.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:08 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:09.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 radosgw[93037]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 02:11:09 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:10.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:11:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 02:11:10 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:11.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:12.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:12 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:12 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 02:11:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:13.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:13 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:14 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 31 02:11:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:15 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:16.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:16 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 31 02:11:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1219 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:17.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:17 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:17 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1219 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:18.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:18 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 31 02:11:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:19 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:11:20.141 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:11:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:11:20.141 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:11:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:11:20.141 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:11:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:20.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:20 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:21.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=404 latency=0.005000118s ======
Jan 31 02:11:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:21.498 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.005000118s
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - - [31/Jan/2026:07:11:21.517 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000023s
Jan 31 02:11:21 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1224 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:22 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:22 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1224 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Jan 31 02:11:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:23 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:24 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Jan 31 02:11:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:25.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:25 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 02:11:26 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:26 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 02:11:26 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 02:11:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1229 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:27 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1229 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:28 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 1.0 MiB/s wr, 9 op/s
Jan 31 02:11:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:29.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:29 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:30 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 6.4 KiB/s rd, 1.0 MiB/s wr, 9 op/s
Jan 31 02:11:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:31.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:31 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 2aa716a2-8a48-44bb-ab5e-3e43fe56ffde does not exist
Jan 31 02:11:31 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 0bbe9a92-8d33-4c67-a30e-2e559a79f9e7 does not exist
Jan 31 02:11:31 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev f82586e0-36b7-4d89-8b8b-95150fce8afb does not exist
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:31 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.093696188 +0000 UTC m=+0.043325525 container create f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:11:32 np0005603541 systemd[1]: Started libpod-conmon-f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518.scope.
Jan 31 02:11:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.167547336 +0000 UTC m=+0.117176753 container init f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.071811005 +0000 UTC m=+0.021440382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.174247648 +0000 UTC m=+0.123877005 container start f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.177254343 +0000 UTC m=+0.126883700 container attach f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:11:32 np0005603541 exciting_panini[250202]: 167 167
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.17840077 +0000 UTC m=+0.128030137 container died f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:11:32 np0005603541 systemd[1]: libpod-f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518.scope: Deactivated successfully.
Jan 31 02:11:32 np0005603541 systemd[1]: var-lib-containers-storage-overlay-2a0f6a648cd84378dcfda329187d3d15dc2688fdb650a4d4e06153c3d0e6c567-merged.mount: Deactivated successfully.
Jan 31 02:11:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 02:11:32 np0005603541 podman[250185]: 2026-01-31 07:11:32.223155029 +0000 UTC m=+0.172784366 container remove f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:11:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:32.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:32 np0005603541 systemd[1]: libpod-conmon-f56fe49ba38a4358434ecbbcfaa00844393b9b6aa6a341c9bb21e1cc46bdc518.scope: Deactivated successfully.
Jan 31 02:11:32 np0005603541 podman[250226]: 2026-01-31 07:11:32.344294038 +0000 UTC m=+0.038745024 container create 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:11:32 np0005603541 systemd[1]: Started libpod-conmon-401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b.scope.
Jan 31 02:11:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 02:11:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 02:11:32 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:32 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:32 np0005603541 podman[250226]: 2026-01-31 07:11:32.326621118 +0000 UTC m=+0.021072144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:32 np0005603541 podman[250226]: 2026-01-31 07:11:32.422354649 +0000 UTC m=+0.116805655 container init 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:11:32 np0005603541 podman[250226]: 2026-01-31 07:11:32.430075676 +0000 UTC m=+0.124526682 container start 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:11:32 np0005603541 podman[250226]: 2026-01-31 07:11:32.433354886 +0000 UTC m=+0.127805932 container attach 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:11:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Jan 31 02:11:33 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:33 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:33 np0005603541 quirky_mendeleev[250243]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:11:33 np0005603541 quirky_mendeleev[250243]: --> relative data size: 1.0
Jan 31 02:11:33 np0005603541 quirky_mendeleev[250243]: --> All data devices are unavailable
Jan 31 02:11:33 np0005603541 systemd[1]: libpod-401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b.scope: Deactivated successfully.
Jan 31 02:11:33 np0005603541 podman[250226]: 2026-01-31 07:11:33.158119658 +0000 UTC m=+0.852570674 container died 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:11:33 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ef6b1841406404c8f6698b250bff15257ec409493666f8deaf5c2fe24b2f3e97-merged.mount: Deactivated successfully.
Jan 31 02:11:33 np0005603541 podman[250226]: 2026-01-31 07:11:33.210170265 +0000 UTC m=+0.904621271 container remove 401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:11:33 np0005603541 systemd[1]: libpod-conmon-401333c4619a40ad77da965ed76afb73b16e47216ab38c46d018cb9ae8dfa33b.scope: Deactivated successfully.
Jan 31 02:11:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.74154374 +0000 UTC m=+0.045359556 container create 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:11:33 np0005603541 systemd[1]: Started libpod-conmon-3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201.scope.
Jan 31 02:11:33 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.719987915 +0000 UTC m=+0.023803711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.851430415 +0000 UTC m=+0.155246221 container init 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.857207445 +0000 UTC m=+0.161023221 container start 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:11:33 np0005603541 dreamy_shockley[250429]: 167 167
Jan 31 02:11:33 np0005603541 systemd[1]: libpod-3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201.scope: Deactivated successfully.
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.917636997 +0000 UTC m=+0.221452773 container attach 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.918103548 +0000 UTC m=+0.221919334 container died 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:11:33 np0005603541 systemd[1]: var-lib-containers-storage-overlay-41499b75c28f76f04f93b3eda529c071489662aeaf8e68574a94c33dea236f2e-merged.mount: Deactivated successfully.
Jan 31 02:11:33 np0005603541 podman[250413]: 2026-01-31 07:11:33.964294562 +0000 UTC m=+0.268110348 container remove 3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:11:33 np0005603541 systemd[1]: libpod-conmon-3aa40128d02a1119c53aab0e86855f8c42003f23d13d56baeb476241eafa9201.scope: Deactivated successfully.
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.121764475 +0000 UTC m=+0.051109035 container create 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:11:34 np0005603541 systemd[1]: Started libpod-conmon-2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc.scope.
Jan 31 02:11:34 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.101801139 +0000 UTC m=+0.031145739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f6b906d9a3349962211e86c169230e069cd547d8b43c2dc4a200efa73d017/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f6b906d9a3349962211e86c169230e069cd547d8b43c2dc4a200efa73d017/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f6b906d9a3349962211e86c169230e069cd547d8b43c2dc4a200efa73d017/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:34 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f6b906d9a3349962211e86c169230e069cd547d8b43c2dc4a200efa73d017/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.21769582 +0000 UTC m=+0.147040390 container init 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.22587822 +0000 UTC m=+0.155222810 container start 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:11:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.230141573 +0000 UTC m=+0.159486143 container attach 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:11:34 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:34 np0005603541 nova_compute[245601]: 2026-01-31 07:11:34.620 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 9.1 KiB/s rd, 1.0 MiB/s wr, 13 op/s
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]: {
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:    "0": [
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:        {
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "devices": [
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "/dev/loop3"
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            ],
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "lv_name": "ceph_lv0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "lv_size": "7511998464",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "name": "ceph_lv0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "tags": {
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.cluster_name": "ceph",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.crush_device_class": "",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.encrypted": "0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.osd_id": "0",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.type": "block",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:                "ceph.vdo": "0"
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            },
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "type": "block",
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:            "vg_name": "ceph_vg0"
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:        }
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]:    ]
Jan 31 02:11:34 np0005603541 reverent_gagarin[250471]: }
Jan 31 02:11:34 np0005603541 systemd[1]: libpod-2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc.scope: Deactivated successfully.
Jan 31 02:11:34 np0005603541 podman[250455]: 2026-01-31 07:11:34.96535531 +0000 UTC m=+0.894699870 container died 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 02:11:35 np0005603541 systemd[1]: var-lib-containers-storage-overlay-857f6b906d9a3349962211e86c169230e069cd547d8b43c2dc4a200efa73d017-merged.mount: Deactivated successfully.
Jan 31 02:11:35 np0005603541 podman[250455]: 2026-01-31 07:11:35.025379021 +0000 UTC m=+0.954723571 container remove 2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:11:35 np0005603541 systemd[1]: libpod-conmon-2899da29059f47f9f9990d9c8da0b6fc071f2d001c5c48cdced15980339025cc.scope: Deactivated successfully.
Jan 31 02:11:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:35.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:35 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.618831987 +0000 UTC m=+0.045042648 container create e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:11:35 np0005603541 nova_compute[245601]: 2026-01-31 07:11:35.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:35 np0005603541 nova_compute[245601]: 2026-01-31 07:11:35.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:11:35 np0005603541 systemd[1]: Started libpod-conmon-e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de.scope.
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.60131263 +0000 UTC m=+0.027523311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.713571442 +0000 UTC m=+0.139782103 container init e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.719954758 +0000 UTC m=+0.146165409 container start e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.723510864 +0000 UTC m=+0.149721525 container attach e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:11:35 np0005603541 zealous_ishizaka[250649]: 167 167
Jan 31 02:11:35 np0005603541 systemd[1]: libpod-e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de.scope: Deactivated successfully.
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.726247331 +0000 UTC m=+0.152457992 container died e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:11:35 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b9a6d779b5f2815bd15560c09d177107c212e345620cfab501d533de727a3250-merged.mount: Deactivated successfully.
Jan 31 02:11:35 np0005603541 podman[250632]: 2026-01-31 07:11:35.767821083 +0000 UTC m=+0.194031724 container remove e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:11:35 np0005603541 systemd[1]: libpod-conmon-e6df4c4b19c8bdf55ddc8fee742b0d1089a4cab7592816d40dc4191821e526de.scope: Deactivated successfully.
Jan 31 02:11:35 np0005603541 podman[250674]: 2026-01-31 07:11:35.927310796 +0000 UTC m=+0.066419459 container create ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:11:35 np0005603541 systemd[1]: Started libpod-conmon-ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3.scope.
Jan 31 02:11:35 np0005603541 podman[250674]: 2026-01-31 07:11:35.898859713 +0000 UTC m=+0.037968426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:11:35 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:11:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e439b63586c374198b709f34bdf71f5a5ebbe3639646d169eb4a86f29048fed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e439b63586c374198b709f34bdf71f5a5ebbe3639646d169eb4a86f29048fed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e439b63586c374198b709f34bdf71f5a5ebbe3639646d169eb4a86f29048fed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:36 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e439b63586c374198b709f34bdf71f5a5ebbe3639646d169eb4a86f29048fed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:11:36 np0005603541 podman[250674]: 2026-01-31 07:11:36.018979666 +0000 UTC m=+0.158088359 container init ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:11:36 np0005603541 podman[250674]: 2026-01-31 07:11:36.027318219 +0000 UTC m=+0.166426872 container start ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:11:36 np0005603541 podman[250674]: 2026-01-31 07:11:36.031386929 +0000 UTC m=+0.170495672 container attach ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:11:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:36 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.628 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.628 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.648 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.648 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.649 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.683 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.684 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.684 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.684 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:11:36 np0005603541 nova_compute[245601]: 2026-01-31 07:11:36.685 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]: {
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:        "osd_id": 0,
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:        "type": "bluestore"
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]:    }
Jan 31 02:11:36 np0005603541 xenodochial_thompson[250690]: }
Jan 31 02:11:36 np0005603541 systemd[1]: libpod-ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3.scope: Deactivated successfully.
Jan 31 02:11:36 np0005603541 podman[250674]: 2026-01-31 07:11:36.832683793 +0000 UTC m=+0.971792446 container died ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:11:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 895 KiB/s wr, 12 op/s
Jan 31 02:11:36 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e439b63586c374198b709f34bdf71f5a5ebbe3639646d169eb4a86f29048fed5-merged.mount: Deactivated successfully.
Jan 31 02:11:36 np0005603541 podman[250674]: 2026-01-31 07:11:36.894747105 +0000 UTC m=+1.033855758 container remove ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:11:36 np0005603541 systemd[1]: libpod-conmon-ff3232df910ac70ee77cb3598899269bdd972c99f503db67dc5bd6d785e434c3.scope: Deactivated successfully.
Jan 31 02:11:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:11:36 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:36 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev fbd1a58d-7b44-446c-bbb7-74b942de3f4e does not exist
Jan 31 02:11:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 706f78c5-5a1b-4f2e-ab5d-e26dd4baaa6e does not exist
Jan 31 02:11:37 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev c0ff07c9-5e76-4196-ac22-42a2d4e4d0b5 does not exist
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046354474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.135 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1239 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.270 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.271 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.271 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.271 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:11:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.333 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.333 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.346 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1239 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:11:37 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663682457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.791 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.797 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.825 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.827 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:11:37 np0005603541 nova_compute[245601]: 2026-01-31 07:11:37.827 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:11:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:38 np0005603541 podman[250845]: 2026-01-31 07:11:38.256654317 +0000 UTC m=+0.060733780 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 02:11:38 np0005603541 podman[250844]: 2026-01-31 07:11:38.295383439 +0000 UTC m=+0.102959887 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:11:38 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:38 np0005603541 nova_compute[245601]: 2026-01-31 07:11:38.804 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:38 np0005603541 nova_compute[245601]: 2026-01-31 07:11:38.804 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:38 np0005603541 nova_compute[245601]: 2026-01-31 07:11:38.804 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 614 B/s wr, 3 op/s
Jan 31 02:11:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:39.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:39 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:39 np0005603541 nova_compute[245601]: 2026-01-31 07:11:39.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:40 np0005603541 nova_compute[245601]: 2026-01-31 07:11:40.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:11:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:40 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 2 op/s
Jan 31 02:11:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:41 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1244 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:42 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:42 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1244 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 392 B/s wr, 2 op/s
Jan 31 02:11:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:43 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:44.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:44 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 341 B/s wr, 2 op/s
Jan 31 02:11:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:45.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:45 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:46.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:46 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1249 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:47 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:47 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1249 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:48.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:11:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:48 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:11:49
Jan 31 02:11:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:11:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:11:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'backups', '.rgw.root', 'images']
Jan 31 02:11:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:11:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:50 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:51 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:51.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:52 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1254 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:52.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:53 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:53 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1254 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:54 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:11:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:55 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:56 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1259 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:57 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:57.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:58 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:58 np0005603541 ceph-mon[74355]: Health check update: 1 slow ops, oldest one blocked for 1259 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:11:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:58.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:11:59 np0005603541 ceph-mon[74355]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 02:11:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:11:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:59.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:00.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:01.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:01 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1264 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:02.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:02 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1264 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:04.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:04 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:05 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:06 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1269 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.237733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527237761, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1335, "num_deletes": 258, "total_data_size": 1722332, "memory_usage": 1755984, "flush_reason": "Manual Compaction"}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527248667, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1694251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22704, "largest_seqno": 24038, "table_properties": {"data_size": 1688438, "index_size": 2889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14797, "raw_average_key_size": 20, "raw_value_size": 1675432, "raw_average_value_size": 2295, "num_data_blocks": 127, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843437, "oldest_key_time": 1769843437, "file_creation_time": 1769843527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 10982 microseconds, and 3661 cpu microseconds.
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.248714) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1694251 bytes OK
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.248733) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.250462) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.250476) EVENT_LOG_v1 {"time_micros": 1769843527250472, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.250493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1716245, prev total WAL file size 1716245, number of live WAL files 2.
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.251020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353035' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1654KB)], [50(8461KB)]
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527251139, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10358923, "oldest_snapshot_seqno": -1}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 6049 keys, 10199419 bytes, temperature: kUnknown
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527329324, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 10199419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10159919, "index_size": 23262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 158066, "raw_average_key_size": 26, "raw_value_size": 10050595, "raw_average_value_size": 1661, "num_data_blocks": 929, "num_entries": 6049, "num_filter_entries": 6049, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.329562) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 10199419 bytes
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.330821) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6582, records dropped: 533 output_compression: NoCompression
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.330840) EVENT_LOG_v1 {"time_micros": 1769843527330831, "job": 26, "event": "compaction_finished", "compaction_time_micros": 78229, "compaction_time_cpu_micros": 18457, "output_level": 6, "num_output_files": 1, "total_output_size": 10199419, "num_input_records": 6582, "num_output_records": 6049, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527331111, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527332241, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.250819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.332272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.332277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.332279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.332281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:12:07.332284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:12:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:07.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:07 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1269 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:09 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:09 np0005603541 podman[250978]: 2026-01-31 07:12:09.024285348 +0000 UTC m=+0.063123011 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 02:12:09 np0005603541 podman[250979]: 2026-01-31 07:12:09.05987295 +0000 UTC m=+0.095627398 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:12:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:10 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003727767890377815 of space, bias 1.0, pg target 0.11183303671133446 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:12:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:11 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:11.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:12 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1274 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:13 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:13 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1274 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:13.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:14 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:14.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:15 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:15.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:16 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1279 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:18 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:18 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1279 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:19.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:19 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:12:20.141 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:12:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:12:20.142 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:12:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:12:20.142 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:12:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:20 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:21.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:21 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1284 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:22 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1284 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:23 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:24.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:24 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:25 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:27 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1289 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:28.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:28 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:28 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1289 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:29 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:30.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:30 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:31.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:31 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1294 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:32.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:32 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1294 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:33 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.649 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.650 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.650 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:12:34 np0005603541 nova_compute[245601]: 2026-01-31 07:12:34.668 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.684 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.685 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.685 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.701 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.701 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:36 np0005603541 nova_compute[245601]: 2026-01-31 07:12:36.701 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:12:36 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1299 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.647 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.647 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.647 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.647 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:12:37 np0005603541 nova_compute[245601]: 2026-01-31 07:12:37.648 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:12:37 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:37 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1299 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 3874c099-ee58-4ea3-a2a1-a190950f56d6 does not exist
Jan 31 02:12:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 306ad08b-e0e3-4880-af6c-94e3c47f9114 does not exist
Jan 31 02:12:38 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 542100d0-b81e-4385-b48c-3a01ad189639 does not exist
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301459868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.090 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.232 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.233 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5217MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.233 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.233 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:12:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.494 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.495 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.546381277 +0000 UTC m=+0.040291747 container create 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:12:38 np0005603541 systemd[1]: Started libpod-conmon-4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487.scope.
Jan 31 02:12:38 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.525931382 +0000 UTC m=+0.019841882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.622 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Refreshing inventories for resource provider 7666a20e-f730-4016-ad1a-a5df3a106dcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.623806403 +0000 UTC m=+0.117716893 container init 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.63275426 +0000 UTC m=+0.126664730 container start 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:12:38 np0005603541 trusting_galois[251423]: 167 167
Jan 31 02:12:38 np0005603541 systemd[1]: libpod-4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487.scope: Deactivated successfully.
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.660275847 +0000 UTC m=+0.154186407 container attach 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.661917416 +0000 UTC m=+0.155827896 container died 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:12:38 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7fd70c8e379dfb7d7a4d6004460026d0c5ff691199d82af36e2b7ddc5975e66a-merged.mount: Deactivated successfully.
Jan 31 02:12:38 np0005603541 podman[251383]: 2026-01-31 07:12:38.758875385 +0000 UTC m=+0.252785865 container remove 4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:12:38 np0005603541 systemd[1]: libpod-conmon-4b87f6efb76e2ab5982d5a3c971ae0c34a92288102ccfdd8098777284b540487.scope: Deactivated successfully.
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.781 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Updating ProviderTree inventory for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.782 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Updating inventory in ProviderTree for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.822 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Refreshing aggregate associations for resource provider 7666a20e-f730-4016-ad1a-a5df3a106dcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:38 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.850 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Refreshing trait associations for resource provider 7666a20e-f730-4016-ad1a-a5df3a106dcd, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:12:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:38 np0005603541 nova_compute[245601]: 2026-01-31 07:12:38.868 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:12:38 np0005603541 podman[251475]: 2026-01-31 07:12:38.903386116 +0000 UTC m=+0.048699511 container create 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:12:38 np0005603541 systemd[1]: Started libpod-conmon-59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6.scope.
Jan 31 02:12:38 np0005603541 podman[251475]: 2026-01-31 07:12:38.877811597 +0000 UTC m=+0.023125012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:38 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:38 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:39 np0005603541 podman[251475]: 2026-01-31 07:12:39.025048004 +0000 UTC m=+0.170361419 container init 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:12:39 np0005603541 podman[251475]: 2026-01-31 07:12:39.031615463 +0000 UTC m=+0.176928858 container start 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:12:39 np0005603541 podman[251475]: 2026-01-31 07:12:39.03645514 +0000 UTC m=+0.181768545 container attach 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:12:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:12:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4199279679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:12:39 np0005603541 nova_compute[245601]: 2026-01-31 07:12:39.313 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:12:39 np0005603541 nova_compute[245601]: 2026-01-31 07:12:39.319 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:12:39 np0005603541 nova_compute[245601]: 2026-01-31 07:12:39.335 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:12:39 np0005603541 nova_compute[245601]: 2026-01-31 07:12:39.337 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:12:39 np0005603541 nova_compute[245601]: 2026-01-31 07:12:39.337 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:12:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:39 np0005603541 peaceful_mccarthy[251493]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:12:39 np0005603541 peaceful_mccarthy[251493]: --> relative data size: 1.0
Jan 31 02:12:39 np0005603541 peaceful_mccarthy[251493]: --> All data devices are unavailable
Jan 31 02:12:39 np0005603541 systemd[1]: libpod-59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6.scope: Deactivated successfully.
Jan 31 02:12:39 np0005603541 podman[251475]: 2026-01-31 07:12:39.792295151 +0000 UTC m=+0.937608546 container died 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:12:39 np0005603541 systemd[1]: var-lib-containers-storage-overlay-11f8b921226b1835a48443bee86bee3f9623f67de920bebabe63dfe88674e835-merged.mount: Deactivated successfully.
Jan 31 02:12:39 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:39 np0005603541 podman[251475]: 2026-01-31 07:12:39.875151738 +0000 UTC m=+1.020465133 container remove 59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:12:39 np0005603541 systemd[1]: libpod-conmon-59c3516507f9cd8c68af117cb1923602cffd3fa286f61ba35bd845c9686ea9a6.scope: Deactivated successfully.
Jan 31 02:12:39 np0005603541 podman[251542]: 2026-01-31 07:12:39.90947027 +0000 UTC m=+0.086602529 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:12:39 np0005603541 podman[251531]: 2026-01-31 07:12:39.934334503 +0000 UTC m=+0.111456091 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:12:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:40.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:40 np0005603541 nova_compute[245601]: 2026-01-31 07:12:40.337 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:40 np0005603541 nova_compute[245601]: 2026-01-31 07:12:40.337 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:40 np0005603541 nova_compute[245601]: 2026-01-31 07:12:40.338 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:40 np0005603541 nova_compute[245601]: 2026-01-31 07:12:40.338 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:40 np0005603541 nova_compute[245601]: 2026-01-31 07:12:40.338 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.45141243 +0000 UTC m=+0.035696416 container create e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:12:40 np0005603541 systemd[1]: Started libpod-conmon-e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45.scope.
Jan 31 02:12:40 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.529152513 +0000 UTC m=+0.113436589 container init e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.433282911 +0000 UTC m=+0.017566947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.534158964 +0000 UTC m=+0.118442990 container start e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.537846704 +0000 UTC m=+0.122130730 container attach e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:12:40 np0005603541 lucid_swirles[251741]: 167 167
Jan 31 02:12:40 np0005603541 systemd[1]: libpod-e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45.scope: Deactivated successfully.
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.539664958 +0000 UTC m=+0.123948954 container died e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:12:40 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e31db469da0ca3ed7486393c8ff74a58e20faffc4ac5177c3b815d5b4cf4cbe0-merged.mount: Deactivated successfully.
Jan 31 02:12:40 np0005603541 podman[251725]: 2026-01-31 07:12:40.577455653 +0000 UTC m=+0.161739639 container remove e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:12:40 np0005603541 systemd[1]: libpod-conmon-e6d3e7a6c62e278e76fb9087aaadb79717e8c0851d9ce1f2119ef63d0b078e45.scope: Deactivated successfully.
Jan 31 02:12:40 np0005603541 podman[251765]: 2026-01-31 07:12:40.69743449 +0000 UTC m=+0.045420071 container create 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:12:40 np0005603541 systemd[1]: Started libpod-conmon-98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee.scope.
Jan 31 02:12:40 np0005603541 podman[251765]: 2026-01-31 07:12:40.674143076 +0000 UTC m=+0.022128717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:40 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3a3db4e51504ec777e91a9becea67b9f39284b011dd577f6e82057b4d1e57c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3a3db4e51504ec777e91a9becea67b9f39284b011dd577f6e82057b4d1e57c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3a3db4e51504ec777e91a9becea67b9f39284b011dd577f6e82057b4d1e57c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:40 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3a3db4e51504ec777e91a9becea67b9f39284b011dd577f6e82057b4d1e57c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:40 np0005603541 podman[251765]: 2026-01-31 07:12:40.814948348 +0000 UTC m=+0.162933919 container init 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:12:40 np0005603541 podman[251765]: 2026-01-31 07:12:40.821906625 +0000 UTC m=+0.169892186 container start 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:12:40 np0005603541 podman[251765]: 2026-01-31 07:12:40.825783299 +0000 UTC m=+0.173768880 container attach 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:12:40 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:41 np0005603541 tender_golick[251782]: {
Jan 31 02:12:41 np0005603541 tender_golick[251782]:    "0": [
Jan 31 02:12:41 np0005603541 tender_golick[251782]:        {
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "devices": [
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "/dev/loop3"
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            ],
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "lv_name": "ceph_lv0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "lv_size": "7511998464",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "name": "ceph_lv0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "tags": {
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.cluster_name": "ceph",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.crush_device_class": "",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.encrypted": "0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.osd_id": "0",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.type": "block",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:                "ceph.vdo": "0"
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            },
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "type": "block",
Jan 31 02:12:41 np0005603541 tender_golick[251782]:            "vg_name": "ceph_vg0"
Jan 31 02:12:41 np0005603541 tender_golick[251782]:        }
Jan 31 02:12:41 np0005603541 tender_golick[251782]:    ]
Jan 31 02:12:41 np0005603541 tender_golick[251782]: }
Jan 31 02:12:41 np0005603541 systemd[1]: libpod-98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee.scope: Deactivated successfully.
Jan 31 02:12:41 np0005603541 podman[251765]: 2026-01-31 07:12:41.578543897 +0000 UTC m=+0.926529478 container died 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:12:41 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1c3a3db4e51504ec777e91a9becea67b9f39284b011dd577f6e82057b4d1e57c-merged.mount: Deactivated successfully.
Jan 31 02:12:41 np0005603541 podman[251765]: 2026-01-31 07:12:41.6381503 +0000 UTC m=+0.986135861 container remove 98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:12:41 np0005603541 systemd[1]: libpod-conmon-98d1f3fce632b80644f31ae43580b85ff834836720d99da8fdb2da1f353ce9ee.scope: Deactivated successfully.
Jan 31 02:12:41 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.148245339 +0000 UTC m=+0.038369991 container create bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:12:42 np0005603541 systemd[1]: Started libpod-conmon-bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103.scope.
Jan 31 02:12:42 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.203587479 +0000 UTC m=+0.093712151 container init bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.211543592 +0000 UTC m=+0.101668244 container start bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.215926768 +0000 UTC m=+0.106051410 container attach bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:12:42 np0005603541 busy_williams[251962]: 167 167
Jan 31 02:12:42 np0005603541 systemd[1]: libpod-bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103.scope: Deactivated successfully.
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.217728161 +0000 UTC m=+0.107852813 container died bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.131685497 +0000 UTC m=+0.021810189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1304 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:42 np0005603541 systemd[1]: var-lib-containers-storage-overlay-2854d820586847f6a57c24059132ed859ce4437efbbb402b75f574a71f9ef7f3-merged.mount: Deactivated successfully.
Jan 31 02:12:42 np0005603541 podman[251946]: 2026-01-31 07:12:42.252495464 +0000 UTC m=+0.142620106 container remove bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:12:42 np0005603541 systemd[1]: libpod-conmon-bdd90e957b31ec3beab7ce5b09bc17aa49717f7c13356ad1929bcd6195610103.scope: Deactivated successfully.
Jan 31 02:12:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:42 np0005603541 podman[251984]: 2026-01-31 07:12:42.362502179 +0000 UTC m=+0.036924246 container create 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:12:42 np0005603541 systemd[1]: Started libpod-conmon-2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53.scope.
Jan 31 02:12:42 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:12:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8c0333422955a30272884dbf08b31a2ed955848bc3b4f3d8fc6222ec7af73a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8c0333422955a30272884dbf08b31a2ed955848bc3b4f3d8fc6222ec7af73a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8c0333422955a30272884dbf08b31a2ed955848bc3b4f3d8fc6222ec7af73a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:42 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8c0333422955a30272884dbf08b31a2ed955848bc3b4f3d8fc6222ec7af73a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:12:42 np0005603541 podman[251984]: 2026-01-31 07:12:42.344933523 +0000 UTC m=+0.019355610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:12:42 np0005603541 podman[251984]: 2026-01-31 07:12:42.443993394 +0000 UTC m=+0.118415561 container init 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:12:42 np0005603541 podman[251984]: 2026-01-31 07:12:42.452567831 +0000 UTC m=+0.126989908 container start 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:12:42 np0005603541 podman[251984]: 2026-01-31 07:12:42.455949874 +0000 UTC m=+0.130372021 container attach 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:12:42 np0005603541 nova_compute[245601]: 2026-01-31 07:12:42.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:12:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:42 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1304 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]: {
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:        "osd_id": 0,
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:        "type": "bluestore"
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]:    }
Jan 31 02:12:43 np0005603541 vigorous_burnell[252000]: }
Jan 31 02:12:43 np0005603541 systemd[1]: libpod-2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53.scope: Deactivated successfully.
Jan 31 02:12:43 np0005603541 podman[251984]: 2026-01-31 07:12:43.310427215 +0000 UTC m=+0.984849292 container died 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:12:43 np0005603541 systemd[1]: var-lib-containers-storage-overlay-2b8c0333422955a30272884dbf08b31a2ed955848bc3b4f3d8fc6222ec7af73a-merged.mount: Deactivated successfully.
Jan 31 02:12:43 np0005603541 podman[251984]: 2026-01-31 07:12:43.366122934 +0000 UTC m=+1.040545001 container remove 2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:12:43 np0005603541 systemd[1]: libpod-conmon-2073b79cc938b6d473be1da7b4a4c1f1f8cbb28fb9089ab8bcb00e6e814f7e53.scope: Deactivated successfully.
Jan 31 02:12:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:12:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:43 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:12:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:43 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:43 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d914345a-c3d1-4085-898b-f180db889cd5 does not exist
Jan 31 02:12:43 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 571bf8dc-b5a5-4918-914d-f19783a9541f does not exist
Jan 31 02:12:43 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 45379726-7b4a-4567-859b-1a7d95010337 does not exist
Jan 31 02:12:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:44 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:44 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:44 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:12:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:45 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1309 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:47 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:47 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1309 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:12:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:12:49
Jan 31 02:12:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:12:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:12:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'images']
Jan 31 02:12:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:12:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:49 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:50 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:52 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1314 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:53 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:53 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1314 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:53.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:54 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:12:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:55.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:55 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1319 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:57.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:58 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1319 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:12:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:58.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:12:59 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:12:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:12:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:59.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:13:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1748277647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:13:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:13:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1748277647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:13:01 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1324 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:02 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1324 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:04 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:05.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:06 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1329 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:07 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:07 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1329 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:08.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:09.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:09 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:10 np0005603541 podman[252150]: 2026-01-31 07:13:10.080750053 +0000 UTC m=+0.110820107 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 02:13:10 np0005603541 podman[252151]: 2026-01-31 07:13:10.090066458 +0000 UTC m=+0.119879465 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003727767890377815 of space, bias 1.0, pg target 0.11183303671133446 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:13:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:10.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:10 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:11 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1334 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:12 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:12 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1334 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:14 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:14.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:15 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:16.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:16 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:17 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:18 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:19.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:19 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:13:20.142 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:13:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:13:20.143 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:13:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:13:20.143 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:13:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:20 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:21.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:21 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:22 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:23 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:24 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:25.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.105586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606105676, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 251, "total_data_size": 1457973, "memory_usage": 1481456, "flush_reason": "Manual Compaction"}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606117926, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1433942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24039, "largest_seqno": 25191, "table_properties": {"data_size": 1428804, "index_size": 2406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13412, "raw_average_key_size": 20, "raw_value_size": 1417507, "raw_average_value_size": 2194, "num_data_blocks": 106, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843527, "oldest_key_time": 1769843527, "file_creation_time": 1769843606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12328 microseconds, and 3106 cpu microseconds.
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.117966) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1433942 bytes OK
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.117981) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.121493) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.121526) EVENT_LOG_v1 {"time_micros": 1769843606121518, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.121547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1452603, prev total WAL file size 1452603, number of live WAL files 2.
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.122339) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1400KB)], [53(9960KB)]
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606122367, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11633361, "oldest_snapshot_seqno": -1}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 6180 keys, 9990141 bytes, temperature: kUnknown
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606183731, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9990141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9950161, "index_size": 23422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 162016, "raw_average_key_size": 26, "raw_value_size": 9838722, "raw_average_value_size": 1592, "num_data_blocks": 934, "num_entries": 6180, "num_filter_entries": 6180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.184009) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9990141 bytes
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.185654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.7 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(15.1) write-amplify(7.0) OK, records in: 6695, records dropped: 515 output_compression: NoCompression
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.185684) EVENT_LOG_v1 {"time_micros": 1769843606185670, "job": 28, "event": "compaction_finished", "compaction_time_micros": 61445, "compaction_time_cpu_micros": 19884, "output_level": 6, "num_output_files": 1, "total_output_size": 9990141, "num_input_records": 6695, "num_output_records": 6180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606186050, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606187957, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.122255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.188049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.188054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.188056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.188058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:13:26.188060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:27 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:28 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:28 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:29 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:29.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:30 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:31 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:32.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:33 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:33 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:34.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:34 np0005603541 nova_compute[245601]: 2026-01-31 07:13:34.622 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:36.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:36 np0005603541 nova_compute[245601]: 2026-01-31 07:13:36.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:36 np0005603541 nova_compute[245601]: 2026-01-31 07:13:36.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:13:36 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:37.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:37 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:37 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:38 np0005603541 nova_compute[245601]: 2026-01-31 07:13:38.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:38 np0005603541 nova_compute[245601]: 2026-01-31 07:13:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:13:38 np0005603541 nova_compute[245601]: 2026-01-31 07:13:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:13:38 np0005603541 nova_compute[245601]: 2026-01-31 07:13:38.644 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:13:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:39.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.652 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.652 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.652 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.653 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:13:39 np0005603541 nova_compute[245601]: 2026-01-31 07:13:39.653 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:13:39 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:13:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164076236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.078 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.207 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.209 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5249MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.209 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.209 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.301 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.301 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.327 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:13:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:40.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:40 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:13:40 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231318574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.736 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.744 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.760 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.762 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:13:40 np0005603541 nova_compute[245601]: 2026-01-31 07:13:40.762 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:13:40 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:41 np0005603541 podman[252356]: 2026-01-31 07:13:41.057253073 +0000 UTC m=+0.092124413 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:13:41 np0005603541 podman[252355]: 2026-01-31 07:13:41.067940512 +0000 UTC m=+0.109422922 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 02:13:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:41.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:41 np0005603541 nova_compute[245601]: 2026-01-31 07:13:41.758 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:41 np0005603541 nova_compute[245601]: 2026-01-31 07:13:41.759 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:41 np0005603541 nova_compute[245601]: 2026-01-31 07:13:41.759 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:41 np0005603541 nova_compute[245601]: 2026-01-31 07:13:41.759 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:43 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:43 np0005603541 nova_compute[245601]: 2026-01-31 07:13:43.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:13:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:13:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:44.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 367fae9a-6b35-4f42-bf64-824132246003 does not exist
Jan 31 02:13:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 71ddfe61-395c-4a13-ba0d-06c4cd796c4a does not exist
Jan 31 02:13:44 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 57ede409-83b1-4431-8ea9-ada4478fee33 does not exist
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:13:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:13:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.021190536 +0000 UTC m=+0.019948145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.13527828 +0000 UTC m=+0.134035869 container create a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:13:45 np0005603541 systemd[1]: Started libpod-conmon-a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339.scope.
Jan 31 02:13:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.254933518 +0000 UTC m=+0.253691147 container init a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.261032437 +0000 UTC m=+0.259790036 container start a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:13:45 np0005603541 bold_rubin[252688]: 167 167
Jan 31 02:13:45 np0005603541 systemd[1]: libpod-a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339.scope: Deactivated successfully.
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.272085465 +0000 UTC m=+0.270843094 container attach a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.273682843 +0000 UTC m=+0.272440482 container died a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:13:45 np0005603541 systemd[1]: var-lib-containers-storage-overlay-201499df332fa3f94330b00665c306459dbc5c4da75315775b63e0e8ab7fa279-merged.mount: Deactivated successfully.
Jan 31 02:13:45 np0005603541 podman[252671]: 2026-01-31 07:13:45.412051396 +0000 UTC m=+0.410809005 container remove a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:13:45 np0005603541 systemd[1]: libpod-conmon-a60f6c5206874a679b76c30e5e540de7e6a0382cd1b3cd0320d87a5f8e773339.scope: Deactivated successfully.
Jan 31 02:13:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:45.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:45 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:45 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:13:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:45 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:13:45 np0005603541 podman[252714]: 2026-01-31 07:13:45.529377257 +0000 UTC m=+0.025215172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:45 np0005603541 podman[252714]: 2026-01-31 07:13:45.634092704 +0000 UTC m=+0.129930599 container create 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:13:45 np0005603541 systemd[1]: Started libpod-conmon-4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385.scope.
Jan 31 02:13:45 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:45 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:45 np0005603541 podman[252714]: 2026-01-31 07:13:45.86875582 +0000 UTC m=+0.364593745 container init 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:13:45 np0005603541 podman[252714]: 2026-01-31 07:13:45.873673149 +0000 UTC m=+0.369511044 container start 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:13:45 np0005603541 podman[252714]: 2026-01-31 07:13:45.916008025 +0000 UTC m=+0.411845910 container attach 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:13:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:46.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:46 np0005603541 cranky_gauss[252732]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:13:46 np0005603541 cranky_gauss[252732]: --> relative data size: 1.0
Jan 31 02:13:46 np0005603541 cranky_gauss[252732]: --> All data devices are unavailable
Jan 31 02:13:46 np0005603541 systemd[1]: libpod-4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385.scope: Deactivated successfully.
Jan 31 02:13:46 np0005603541 podman[252714]: 2026-01-31 07:13:46.615672785 +0000 UTC m=+1.111510720 container died 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:13:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:46 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:46 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ea32cf26fb8cf2cd5f184caca3fc678de69a38370b0d0e68779b24d8a6a2b3f6-merged.mount: Deactivated successfully.
Jan 31 02:13:46 np0005603541 podman[252714]: 2026-01-31 07:13:46.768008705 +0000 UTC m=+1.263846600 container remove 4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gauss, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 02:13:46 np0005603541 systemd[1]: libpod-conmon-4aeb0d29447307177c70a208ffaf34329af6b25bbc82d4bf50d5b6289268c385.scope: Deactivated successfully.
Jan 31 02:13:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.357498617 +0000 UTC m=+0.074445375 container create 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.304988845 +0000 UTC m=+0.021935643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:47 np0005603541 systemd[1]: Started libpod-conmon-2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643.scope.
Jan 31 02:13:47 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:47.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.580877489 +0000 UTC m=+0.297824337 container init 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.587666014 +0000 UTC m=+0.304612782 container start 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:13:47 np0005603541 systemd[1]: libpod-2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643.scope: Deactivated successfully.
Jan 31 02:13:47 np0005603541 nifty_lamarr[252916]: 167 167
Jan 31 02:13:47 np0005603541 conmon[252916]: conmon 2b5c2ac044195bc11a07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643.scope/container/memory.events
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.59413193 +0000 UTC m=+0.311078728 container attach 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.594513799 +0000 UTC m=+0.311460577 container died 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:13:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:47 np0005603541 systemd[1]: var-lib-containers-storage-overlay-d603ecba8e7f55c2c29d3fe0155e2923d1deb317ee078a1331ede3a69b97c6a1-merged.mount: Deactivated successfully.
Jan 31 02:13:47 np0005603541 podman[252900]: 2026-01-31 07:13:47.637708855 +0000 UTC m=+0.354655623 container remove 2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:13:47 np0005603541 systemd[1]: libpod-conmon-2b5c2ac044195bc11a070a90d803a23947e70c260380fdb754d3c5f35934c643.scope: Deactivated successfully.
Jan 31 02:13:47 np0005603541 podman[252939]: 2026-01-31 07:13:47.805547122 +0000 UTC m=+0.051546150 container create 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:13:47 np0005603541 systemd[1]: Started libpod-conmon-47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e.scope.
Jan 31 02:13:47 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:47 np0005603541 podman[252939]: 2026-01-31 07:13:47.780161647 +0000 UTC m=+0.026160685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f981de53d28eb05a8668627fd2a0c4dbd9d5a94fb2c13884a9eeabdcf094e6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f981de53d28eb05a8668627fd2a0c4dbd9d5a94fb2c13884a9eeabdcf094e6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f981de53d28eb05a8668627fd2a0c4dbd9d5a94fb2c13884a9eeabdcf094e6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:47 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f981de53d28eb05a8668627fd2a0c4dbd9d5a94fb2c13884a9eeabdcf094e6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:47 np0005603541 podman[252939]: 2026-01-31 07:13:47.900333328 +0000 UTC m=+0.146332356 container init 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:13:47 np0005603541 podman[252939]: 2026-01-31 07:13:47.909547641 +0000 UTC m=+0.155546659 container start 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:13:47 np0005603541 podman[252939]: 2026-01-31 07:13:47.915102596 +0000 UTC m=+0.161101614 container attach 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:13:47 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:48.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]: {
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:    "0": [
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:        {
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "devices": [
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "/dev/loop3"
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            ],
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "lv_name": "ceph_lv0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "lv_size": "7511998464",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "name": "ceph_lv0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "tags": {
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.cluster_name": "ceph",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.crush_device_class": "",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.encrypted": "0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.osd_id": "0",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.type": "block",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:                "ceph.vdo": "0"
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            },
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "type": "block",
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:            "vg_name": "ceph_vg0"
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:        }
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]:    ]
Jan 31 02:13:48 np0005603541 festive_wilbur[252956]: }
Jan 31 02:13:48 np0005603541 systemd[1]: libpod-47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e.scope: Deactivated successfully.
Jan 31 02:13:48 np0005603541 podman[252939]: 2026-01-31 07:13:48.670964229 +0000 UTC m=+0.916963287 container died 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:13:48 np0005603541 systemd[1]: var-lib-containers-storage-overlay-4f981de53d28eb05a8668627fd2a0c4dbd9d5a94fb2c13884a9eeabdcf094e6b-merged.mount: Deactivated successfully.
Jan 31 02:13:48 np0005603541 podman[252939]: 2026-01-31 07:13:48.756903221 +0000 UTC m=+1.002902239 container remove 47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:13:48 np0005603541 systemd[1]: libpod-conmon-47e5f221412038f43d7c482e5ad81ed6d951d9eec39b75946f59c562de2d9e5e.scope: Deactivated successfully.
Jan 31 02:13:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:13:49
Jan 31 02:13:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:13:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:13:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'vms', 'default.rgw.meta']
Jan 31 02:13:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.246108543 +0000 UTC m=+0.036138227 container create d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:13:49 np0005603541 systemd[1]: Started libpod-conmon-d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f.scope.
Jan 31 02:13:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.315083964 +0000 UTC m=+0.105113648 container init d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.322044392 +0000 UTC m=+0.112074076 container start d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.22782832 +0000 UTC m=+0.017858034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:49 np0005603541 jovial_borg[253134]: 167 167
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.325784673 +0000 UTC m=+0.115814367 container attach d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:13:49 np0005603541 systemd[1]: libpod-d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f.scope: Deactivated successfully.
Jan 31 02:13:49 np0005603541 conmon[253134]: conmon d500a83d3df1efb5a902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f.scope/container/memory.events
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.32728146 +0000 UTC m=+0.117311144 container died d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:13:49 np0005603541 systemd[1]: var-lib-containers-storage-overlay-ae767126b31e01674e16ab1f04bf63d9be35ae2d2aeb73102cc9d97839a2bb59-merged.mount: Deactivated successfully.
Jan 31 02:13:49 np0005603541 podman[253118]: 2026-01-31 07:13:49.359435549 +0000 UTC m=+0.149465233 container remove d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:13:49 np0005603541 systemd[1]: libpod-conmon-d500a83d3df1efb5a902235dbb7c849c4ace1981d12550dfff57c3e2dc42166f.scope: Deactivated successfully.
Jan 31 02:13:49 np0005603541 podman[253159]: 2026-01-31 07:13:49.471309649 +0000 UTC m=+0.036162017 container create 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:13:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:49 np0005603541 systemd[1]: Started libpod-conmon-1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07.scope.
Jan 31 02:13:49 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:13:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b911e1cf0d468e24fd433a2031de4e37a8b77f07c33995b1aac9be75d99d464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b911e1cf0d468e24fd433a2031de4e37a8b77f07c33995b1aac9be75d99d464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b911e1cf0d468e24fd433a2031de4e37a8b77f07c33995b1aac9be75d99d464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:49 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b911e1cf0d468e24fd433a2031de4e37a8b77f07c33995b1aac9be75d99d464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:13:49 np0005603541 podman[253159]: 2026-01-31 07:13:49.543884497 +0000 UTC m=+0.108736895 container init 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:13:49 np0005603541 podman[253159]: 2026-01-31 07:13:49.549338649 +0000 UTC m=+0.114191017 container start 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:13:49 np0005603541 podman[253159]: 2026-01-31 07:13:49.457465993 +0000 UTC m=+0.022318361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:13:49 np0005603541 podman[253159]: 2026-01-31 07:13:49.555812116 +0000 UTC m=+0.120664484 container attach 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:13:50 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]: {
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:        "osd_id": 0,
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:        "type": "bluestore"
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]:    }
Jan 31 02:13:50 np0005603541 wizardly_wright[253177]: }
Jan 31 02:13:50 np0005603541 systemd[1]: libpod-1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07.scope: Deactivated successfully.
Jan 31 02:13:50 np0005603541 podman[253159]: 2026-01-31 07:13:50.331932769 +0000 UTC m=+0.896785137 container died 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:13:50 np0005603541 systemd[1]: var-lib-containers-storage-overlay-1b911e1cf0d468e24fd433a2031de4e37a8b77f07c33995b1aac9be75d99d464-merged.mount: Deactivated successfully.
Jan 31 02:13:50 np0005603541 podman[253159]: 2026-01-31 07:13:50.390914218 +0000 UTC m=+0.955766586 container remove 1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:13:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:50.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:50 np0005603541 systemd[1]: libpod-conmon-1cb805d4c549625ecac5c8c26aa06fd808c8e6b50e83b330994cfe59a8810e07.scope: Deactivated successfully.
Jan 31 02:13:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:13:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:13:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 36b3b71d-4f62-428a-822f-4d1fdd84b329 does not exist
Jan 31 02:13:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev e2c80629-4331-420c-9610-2e2eb4175af3 does not exist
Jan 31 02:13:50 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 5b1ae2a1-4ebf-4c24-b1bd-c14ac273eae7 does not exist
Jan 31 02:13:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:51 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:51 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:13:51 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:52.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:52 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:52 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:53 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:54 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:13:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:57 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:57.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:58 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:13:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:58.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:13:59 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:13:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:13:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:59.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:01 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:01.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:03 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:03.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:04.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:04 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:05.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:05 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:06 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:07 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:08 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:09.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:09 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003727767890377815 of space, bias 1.0, pg target 0.11183303671133446 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:14:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:10 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:11 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:12 np0005603541 podman[253325]: 2026-01-31 07:14:12.024446847 +0000 UTC m=+0.057727087 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:14:12 np0005603541 podman[253324]: 2026-01-31 07:14:12.050206285 +0000 UTC m=+0.083439304 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 02:14:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:12.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:12 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:12 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:13 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:14.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:14 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:15 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:16.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:16 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:17 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:18.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:18 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:19 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:14:20.143 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:14:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:14:20.143 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:14:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:14:20.143 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:14:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:20 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:21 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.674658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662674738, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 907, "num_deletes": 250, "total_data_size": 1064296, "memory_usage": 1086232, "flush_reason": "Manual Compaction"}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662689899, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 706955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25192, "largest_seqno": 26098, "table_properties": {"data_size": 703262, "index_size": 1281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10910, "raw_average_key_size": 21, "raw_value_size": 694861, "raw_average_value_size": 1338, "num_data_blocks": 55, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843607, "oldest_key_time": 1769843607, "file_creation_time": 1769843662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 15304 microseconds, and 3320 cpu microseconds.
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.689974) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 706955 bytes OK
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.689996) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.695823) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.695842) EVENT_LOG_v1 {"time_micros": 1769843662695835, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.695863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1059833, prev total WAL file size 1059833, number of live WAL files 2.
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.696867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(690KB)], [56(9755KB)]
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662696950, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 10697096, "oldest_snapshot_seqno": -1}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 6208 keys, 7151644 bytes, temperature: kUnknown
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662769560, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7151644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7115546, "index_size": 19476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 163416, "raw_average_key_size": 26, "raw_value_size": 7007512, "raw_average_value_size": 1128, "num_data_blocks": 760, "num_entries": 6208, "num_filter_entries": 6208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.769941) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7151644 bytes
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.772474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 98.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(25.2) write-amplify(10.1) OK, records in: 6699, records dropped: 491 output_compression: NoCompression
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.772507) EVENT_LOG_v1 {"time_micros": 1769843662772492, "job": 30, "event": "compaction_finished", "compaction_time_micros": 72743, "compaction_time_cpu_micros": 16789, "output_level": 6, "num_output_files": 1, "total_output_size": 7151644, "num_input_records": 6699, "num_output_records": 6208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662772850, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662774730, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.696630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.775023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.775032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.775036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.775040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:14:22.775044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:22 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:24 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:25 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:25.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:26 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:26.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:27 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:28 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:28 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:29 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:29.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:30 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:31 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:32.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:33 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:33 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:36 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:37 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:37 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:37 np0005603541 nova_compute[245601]: 2026-01-31 07:14:37.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:37 np0005603541 nova_compute[245601]: 2026-01-31 07:14:37.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:14:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:38 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:38 np0005603541 nova_compute[245601]: 2026-01-31 07:14:38.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:38 np0005603541 nova_compute[245601]: 2026-01-31 07:14:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:14:38 np0005603541 nova_compute[245601]: 2026-01-31 07:14:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:14:38 np0005603541 nova_compute[245601]: 2026-01-31 07:14:38.641 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:14:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:39 np0005603541 nova_compute[245601]: 2026-01-31 07:14:39.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:39 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:40 np0005603541 nova_compute[245601]: 2026-01-31 07:14:40.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:40 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:41 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.657 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.658 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.658 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.658 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:14:41 np0005603541 nova_compute[245601]: 2026-01-31 07:14:41.659 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659911957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.062 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.213 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.214 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5207MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.214 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.215 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.285 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.286 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.299 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:14:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:14:42 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319490362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.735 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.742 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.766 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.767 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:14:42 np0005603541 nova_compute[245601]: 2026-01-31 07:14:42.768 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:14:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:43 np0005603541 podman[253530]: 2026-01-31 07:14:43.011035953 +0000 UTC m=+0.048124474 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:14:43 np0005603541 podman[253529]: 2026-01-31 07:14:43.034585666 +0000 UTC m=+0.074215389 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:14:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:43 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:43 np0005603541 nova_compute[245601]: 2026-01-31 07:14:43.764 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:43 np0005603541 nova_compute[245601]: 2026-01-31 07:14:43.764 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:44 np0005603541 nova_compute[245601]: 2026-01-31 07:14:44.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:14:44 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:45 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:47 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:47 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:14:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:14:49
Jan 31 02:14:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:14:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:14:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images']
Jan 31 02:14:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:14:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:49.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:49 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:50 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:51 np0005603541 podman[253747]: 2026-01-31 07:14:51.535322161 +0000 UTC m=+0.070047978 container exec ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:14:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:51.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:51 np0005603541 podman[253747]: 2026-01-31 07:14:51.65720384 +0000 UTC m=+0.191929657 container exec_died ea2bfa4270509f4952b7ea8bc34bd400446ee050de63708e950df7ca9416155d (image=quay.io/ceph/ceph:v18, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:14:51 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:14:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:14:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 podman[253903]: 2026-01-31 07:14:52.212854165 +0000 UTC m=+0.052058319 container exec eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 02:14:52 np0005603541 podman[253903]: 2026-01-31 07:14:52.218853791 +0000 UTC m=+0.058057895 container exec_died eef4c6c0771b3ab214ec69cc1ccd975318b9870467bbbbcc8dc590f308d1c358 (image=quay.io/ceph/haproxy:2.3, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-haproxy-rgw-default-compute-0-dsjekd)
Jan 31 02:14:52 np0005603541 podman[253970]: 2026-01-31 07:14:52.407774573 +0000 UTC m=+0.054480578 container exec a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 02:14:52 np0005603541 podman[253970]: 2026-01-31 07:14:52.444020246 +0000 UTC m=+0.090726261 container exec_died a633cad4914240539f641aad4ec51dbc10339db6c6194e4cfd24bb3600712ff8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-keepalived-rgw-default-compute-0-kqakbv, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 02:14:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:52.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:52 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:53 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d9c6b749-d61b-4e7a-9e2c-689a56ce889c does not exist
Jan 31 02:14:53 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 68d916fd-1c4e-4dda-a1e2-73c0f62e7bef does not exist
Jan 31 02:14:53 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev b1ef3712-df30-4bb8-9c09-f202fab81385 does not exist
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:14:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:53.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.644852767 +0000 UTC m=+0.044442993 container create f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:14:53 np0005603541 systemd[1]: Started libpod-conmon-f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447.scope.
Jan 31 02:14:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.624166984 +0000 UTC m=+0.023757220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.724403215 +0000 UTC m=+0.123993471 container init f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.730793251 +0000 UTC m=+0.130383487 container start f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:14:53 np0005603541 systemd[1]: libpod-f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447.scope: Deactivated successfully.
Jan 31 02:14:53 np0005603541 busy_tesla[254295]: 167 167
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.736489969 +0000 UTC m=+0.136080255 container attach f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:14:53 np0005603541 conmon[254295]: conmon f7cfa4f80ff4ce0d070a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447.scope/container/memory.events
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.736901929 +0000 UTC m=+0.136492175 container died f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:14:53 np0005603541 systemd[1]: var-lib-containers-storage-overlay-7fa592ec2459eff6862f25e300255bb0c034821c80bab01e2c4733d7c4f34b22-merged.mount: Deactivated successfully.
Jan 31 02:14:53 np0005603541 podman[254278]: 2026-01-31 07:14:53.775720355 +0000 UTC m=+0.175310581 container remove f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tesla, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:14:53 np0005603541 systemd[1]: libpod-conmon-f7cfa4f80ff4ce0d070a109dddcc70f9be709d9854f3cbeb466898a8e359b447.scope: Deactivated successfully.
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:53 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:14:53 np0005603541 podman[254319]: 2026-01-31 07:14:53.942257022 +0000 UTC m=+0.047833396 container create 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:53 np0005603541 systemd[1]: Started libpod-conmon-1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145.scope.
Jan 31 02:14:53 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:53 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:54 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:54.012409371 +0000 UTC m=+0.117985765 container init 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:53.921162688 +0000 UTC m=+0.026739112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:54.021408499 +0000 UTC m=+0.126984873 container start 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:54.026226537 +0000 UTC m=+0.131802901 container attach 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:14:54 np0005603541 elegant_dirac[254335]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:14:54 np0005603541 elegant_dirac[254335]: --> relative data size: 1.0
Jan 31 02:14:54 np0005603541 elegant_dirac[254335]: --> All data devices are unavailable
Jan 31 02:14:54 np0005603541 systemd[1]: libpod-1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145.scope: Deactivated successfully.
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:54.792202686 +0000 UTC m=+0.897779150 container died 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:54 np0005603541 systemd[1]: var-lib-containers-storage-overlay-9ed5a570455b41edf281ee4ca4ebd23169664d215d6143deb3560c44ab2df3e1-merged.mount: Deactivated successfully.
Jan 31 02:14:54 np0005603541 podman[254319]: 2026-01-31 07:14:54.864304062 +0000 UTC m=+0.969880486 container remove 1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_dirac, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:14:54 np0005603541 systemd[1]: libpod-conmon-1763929ca35df0e30aac00fd275456a5c2c8e2abe9cb7062e69521de7a8b0145.scope: Deactivated successfully.
Jan 31 02:14:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:54 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.466067661 +0000 UTC m=+0.039779290 container create df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:14:55 np0005603541 systemd[1]: Started libpod-conmon-df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a.scope.
Jan 31 02:14:55 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.53499287 +0000 UTC m=+0.108704519 container init df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.54034101 +0000 UTC m=+0.114052619 container start df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:14:55 np0005603541 kind_solomon[254519]: 167 167
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.543786645 +0000 UTC m=+0.117498334 container attach df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:14:55 np0005603541 systemd[1]: libpod-df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a.scope: Deactivated successfully.
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.544287926 +0000 UTC m=+0.117999535 container died df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.450472761 +0000 UTC m=+0.024184380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:55 np0005603541 systemd[1]: var-lib-containers-storage-overlay-8d48ecffce5aac7b3bcb0996be86d6ade6f2833a94244009267b2ee97a443505-merged.mount: Deactivated successfully.
Jan 31 02:14:55 np0005603541 podman[254502]: 2026-01-31 07:14:55.582530858 +0000 UTC m=+0.156242477 container remove df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:14:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:55 np0005603541 systemd[1]: libpod-conmon-df340e285ed1742e3901dfa7c227a06f4c700291373dab870c4688873ebfb39a.scope: Deactivated successfully.
Jan 31 02:14:55 np0005603541 podman[254543]: 2026-01-31 07:14:55.72095477 +0000 UTC m=+0.037856273 container create fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:14:55 np0005603541 systemd[1]: Started libpod-conmon-fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa.scope.
Jan 31 02:14:55 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:55 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23acef20f77a9875dd4d9bc689afcb58efb6298f292138209f69105625436999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:55 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23acef20f77a9875dd4d9bc689afcb58efb6298f292138209f69105625436999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:55 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23acef20f77a9875dd4d9bc689afcb58efb6298f292138209f69105625436999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:55 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23acef20f77a9875dd4d9bc689afcb58efb6298f292138209f69105625436999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:55 np0005603541 podman[254543]: 2026-01-31 07:14:55.800492867 +0000 UTC m=+0.117394380 container init fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:14:55 np0005603541 podman[254543]: 2026-01-31 07:14:55.706459067 +0000 UTC m=+0.023360580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:55 np0005603541 podman[254543]: 2026-01-31 07:14:55.806925124 +0000 UTC m=+0.123826637 container start fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:14:55 np0005603541 podman[254543]: 2026-01-31 07:14:55.810789478 +0000 UTC m=+0.127690971 container attach fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:14:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:56.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]: {
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:    "0": [
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:        {
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "devices": [
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "/dev/loop3"
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            ],
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "lv_name": "ceph_lv0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "lv_size": "7511998464",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "name": "ceph_lv0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "tags": {
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.cluster_name": "ceph",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.crush_device_class": "",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.encrypted": "0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.osd_id": "0",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.type": "block",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:                "ceph.vdo": "0"
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            },
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "type": "block",
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:            "vg_name": "ceph_vg0"
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:        }
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]:    ]
Jan 31 02:14:56 np0005603541 sleepy_yalow[254561]: }
Jan 31 02:14:56 np0005603541 systemd[1]: libpod-fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa.scope: Deactivated successfully.
Jan 31 02:14:56 np0005603541 podman[254570]: 2026-01-31 07:14:56.609336321 +0000 UTC m=+0.026038056 container died fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:14:56 np0005603541 systemd[1]: var-lib-containers-storage-overlay-23acef20f77a9875dd4d9bc689afcb58efb6298f292138209f69105625436999-merged.mount: Deactivated successfully.
Jan 31 02:14:56 np0005603541 podman[254570]: 2026-01-31 07:14:56.666939674 +0000 UTC m=+0.083641379 container remove fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:14:56 np0005603541 systemd[1]: libpod-conmon-fee299469d83c0f3ab6b75109b425f77104f857fddeb315156c7211f2142b0aa.scope: Deactivated successfully.
Jan 31 02:14:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.225243514 +0000 UTC m=+0.040913879 container create edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:14:57 np0005603541 systemd[1]: Started libpod-conmon-edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4.scope.
Jan 31 02:14:57 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.206133868 +0000 UTC m=+0.021804253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.304224547 +0000 UTC m=+0.119894912 container init edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.31046387 +0000 UTC m=+0.126134255 container start edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.315314488 +0000 UTC m=+0.130984833 container attach edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:14:57 np0005603541 stupefied_mahavira[254742]: 167 167
Jan 31 02:14:57 np0005603541 systemd[1]: libpod-edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4.scope: Deactivated successfully.
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.317836759 +0000 UTC m=+0.133507104 container died edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:14:57 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:57 np0005603541 systemd[1]: var-lib-containers-storage-overlay-e646ae73797793072829cfe08ef11bf67d9c76142b736bbef55489cd28e9c9c3-merged.mount: Deactivated successfully.
Jan 31 02:14:57 np0005603541 podman[254727]: 2026-01-31 07:14:57.372696845 +0000 UTC m=+0.188367190 container remove edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:14:57 np0005603541 systemd[1]: libpod-conmon-edba721009981fe14641defe102a7c02ca898c7fb0c179202e2ea9a28684aea4.scope: Deactivated successfully.
Jan 31 02:14:57 np0005603541 podman[254768]: 2026-01-31 07:14:57.538038133 +0000 UTC m=+0.054526139 container create c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:14:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:57 np0005603541 systemd[1]: Started libpod-conmon-c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b.scope.
Jan 31 02:14:57 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:14:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a8b7e0bc9cb5c45906e2199628220b80ababc03f2dc6fb9a78e06f22fa864c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a8b7e0bc9cb5c45906e2199628220b80ababc03f2dc6fb9a78e06f22fa864c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a8b7e0bc9cb5c45906e2199628220b80ababc03f2dc6fb9a78e06f22fa864c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:57 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a8b7e0bc9cb5c45906e2199628220b80ababc03f2dc6fb9a78e06f22fa864c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:14:57 np0005603541 podman[254768]: 2026-01-31 07:14:57.514701864 +0000 UTC m=+0.031189890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:14:57 np0005603541 podman[254768]: 2026-01-31 07:14:57.633452287 +0000 UTC m=+0.149940263 container init c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:57 np0005603541 podman[254768]: 2026-01-31 07:14:57.639896224 +0000 UTC m=+0.156384220 container start c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:14:57 np0005603541 podman[254768]: 2026-01-31 07:14:57.643952193 +0000 UTC m=+0.160440179 container attach c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:14:58 np0005603541 hungry_easley[254784]: {
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:        "osd_id": 0,
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:        "type": "bluestore"
Jan 31 02:14:58 np0005603541 hungry_easley[254784]:    }
Jan 31 02:14:58 np0005603541 hungry_easley[254784]: }
Jan 31 02:14:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:58.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:58 np0005603541 systemd[1]: libpod-c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b.scope: Deactivated successfully.
Jan 31 02:14:58 np0005603541 podman[254768]: 2026-01-31 07:14:58.495748612 +0000 UTC m=+1.012236618 container died c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:14:58 np0005603541 systemd[1]: var-lib-containers-storage-overlay-69a8b7e0bc9cb5c45906e2199628220b80ababc03f2dc6fb9a78e06f22fa864c-merged.mount: Deactivated successfully.
Jan 31 02:14:58 np0005603541 podman[254768]: 2026-01-31 07:14:58.552829753 +0000 UTC m=+1.069317739 container remove c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:14:58 np0005603541 systemd[1]: libpod-conmon-c1ee5489014d78a330290300665e805d288050d3540b4d977165b20cec374d1b.scope: Deactivated successfully.
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:14:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:58 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 8c75bdfd-09ca-4b25-86cc-37e9490e0a6d does not exist
Jan 31 02:14:58 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev fefa7288-a6e0-4215-a6e2-e9b0ff4e251d does not exist
Jan 31 02:14:58 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev d5e4d7c6-e0cf-4a2a-a37a-46b11d309d4f does not exist
Jan 31 02:14:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:14:59 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:14:59 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:59 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:14:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:14:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:01.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:03 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:03.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:04 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:04.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:05 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:05.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:06 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:07 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:07.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:08 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:08.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:09 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:09.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003727767890377815 of space, bias 1.0, pg target 0.11183303671133446 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:15:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:10.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:10 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:11 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:11.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:12.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:12 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:13.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:13 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:13 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:13 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:14 np0005603541 podman[254928]: 2026-01-31 07:15:14.023574948 +0000 UTC m=+0.062730720 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:15:14 np0005603541 podman[254927]: 2026-01-31 07:15:14.053958618 +0000 UTC m=+0.092443263 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:15:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:14.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:14 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:15.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:15 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:16 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:17.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:17 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:18 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:19 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:15:20.144 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:15:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:15:20.145 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:15:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:15:20.145 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:15:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:21 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:22.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:23 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:23 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:23.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:24 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:25 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:26 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:26.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:27 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:28 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:28 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:29 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:29.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:30 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:31 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:31.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:33.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:34 np0005603541 nova_compute[245601]: 2026-01-31 07:15:34.622 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:34 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:35.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:35 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:36 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:37.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:38 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:38 np0005603541 nova_compute[245601]: 2026-01-31 07:15:38.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:38 np0005603541 nova_compute[245601]: 2026-01-31 07:15:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:15:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:39 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:39.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:40 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:40 np0005603541 nova_compute[245601]: 2026-01-31 07:15:40.627 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:40 np0005603541 nova_compute[245601]: 2026-01-31 07:15:40.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:15:40 np0005603541 nova_compute[245601]: 2026-01-31 07:15:40.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:15:40 np0005603541 nova_compute[245601]: 2026-01-31 07:15:40.660 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:15:40 np0005603541 nova_compute[245601]: 2026-01-31 07:15:40.661 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:41 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:41 np0005603541 nova_compute[245601]: 2026-01-31 07:15:41.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:41 np0005603541 nova_compute[245601]: 2026-01-31 07:15:41.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:42 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.622 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:15:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:43.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.662 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.663 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.663 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.663 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:15:43 np0005603541 nova_compute[245601]: 2026-01-31 07:15:43.663 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:15:43 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.111 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.257 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.259 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5226MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.259 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.259 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.365 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.366 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.385 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:15:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:44 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:15:44 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875464181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.802 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.806 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:15:44 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.829 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.830 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:15:44 np0005603541 nova_compute[245601]: 2026-01-31 07:15:44.830 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:15:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:45 np0005603541 podman[255132]: 2026-01-31 07:15:45.042295504 +0000 UTC m=+0.076597686 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:15:45 np0005603541 podman[255133]: 2026-01-31 07:15:45.042470898 +0000 UTC m=+0.073567672 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:15:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:45.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:45 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:46 np0005603541 nova_compute[245601]: 2026-01-31 07:15:46.833 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:15:46 np0005603541 nova_compute[245601]: 2026-01-31 07:15:46.833 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:15:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:47.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:47 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:47 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:15:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:15:49
Jan 31 02:15:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:15:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:15:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log']
Jan 31 02:15:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:15:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:49 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:50.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:50 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:51.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:51 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:53 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:53 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:53.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:54 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:54.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:15:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:55 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:57 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:57.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:58 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:15:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:15:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:15:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:15:59 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 342c3ff9-47d1-4dce-9a87-340471b0a200 does not exist
Jan 31 02:15:59 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 4a240985-445b-46b4-b21c-6784b4ea791d does not exist
Jan 31 02:15:59 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev beac1750-cd4e-40ff-aa1f-f7e18fbacba5 does not exist
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:15:59 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.241963812 +0000 UTC m=+0.060223518 container create b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:16:00 np0005603541 systemd[1]: Started libpod-conmon-b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab.scope.
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.205790561 +0000 UTC m=+0.024050347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:16:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:16:00 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.354907814 +0000 UTC m=+0.173167590 container init b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.362634302 +0000 UTC m=+0.180894038 container start b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:16:00 np0005603541 nervous_visvesvaraya[255523]: 167 167
Jan 31 02:16:00 np0005603541 systemd[1]: libpod-b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab.scope: Deactivated successfully.
Jan 31 02:16:00 np0005603541 conmon[255523]: conmon b2d31dd032a73fcfff63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab.scope/container/memory.events
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.370973195 +0000 UTC m=+0.189232951 container attach b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.371747094 +0000 UTC m=+0.190006800 container died b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:16:00 np0005603541 systemd[1]: var-lib-containers-storage-overlay-a65e429ad6cac9d33dd4a0af74fd1b3a563c4e25aee3dd1d46f7cb35734ca7b0-merged.mount: Deactivated successfully.
Jan 31 02:16:00 np0005603541 podman[255507]: 2026-01-31 07:16:00.487810511 +0000 UTC m=+0.306070247 container remove b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_visvesvaraya, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:16:00 np0005603541 systemd[1]: libpod-conmon-b2d31dd032a73fcfff63ff491733dcf323f11f2555dde1c995d00519d823cbab.scope: Deactivated successfully.
Jan 31 02:16:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:00.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:00 np0005603541 podman[255547]: 2026-01-31 07:16:00.681159701 +0000 UTC m=+0.053991487 container create bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:16:00 np0005603541 systemd[1]: Started libpod-conmon-bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807.scope.
Jan 31 02:16:00 np0005603541 podman[255547]: 2026-01-31 07:16:00.655208828 +0000 UTC m=+0.028040694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:00 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:00 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:00 np0005603541 podman[255547]: 2026-01-31 07:16:00.80798085 +0000 UTC m=+0.180812736 container init bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:16:00 np0005603541 podman[255547]: 2026-01-31 07:16:00.824097123 +0000 UTC m=+0.196928949 container start bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:16:00 np0005603541 podman[255547]: 2026-01-31 07:16:00.82970365 +0000 UTC m=+0.202535436 container attach bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:16:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:01 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:16:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254534780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:16:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:16:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254534780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:16:01 np0005603541 gracious_lehmann[255564]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:16:01 np0005603541 gracious_lehmann[255564]: --> relative data size: 1.0
Jan 31 02:16:01 np0005603541 gracious_lehmann[255564]: --> All data devices are unavailable
Jan 31 02:16:01 np0005603541 systemd[1]: libpod-bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807.scope: Deactivated successfully.
Jan 31 02:16:01 np0005603541 conmon[255564]: conmon bb08e470b18cdb8da477 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807.scope/container/memory.events
Jan 31 02:16:01 np0005603541 podman[255547]: 2026-01-31 07:16:01.554459753 +0000 UTC m=+0.927291539 container died bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:16:01 np0005603541 systemd[1]: var-lib-containers-storage-overlay-cbfe687e2c746515dad6a91792d21de1e1cc5ffacf6b6eb6660b46d808185e99-merged.mount: Deactivated successfully.
Jan 31 02:16:01 np0005603541 podman[255547]: 2026-01-31 07:16:01.605904407 +0000 UTC m=+0.978736193 container remove bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:16:01 np0005603541 systemd[1]: libpod-conmon-bb08e470b18cdb8da477f855515ca7987ae7b46c12630594914ef510787dc807.scope: Deactivated successfully.
Jan 31 02:16:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.256046875 +0000 UTC m=+0.045871139 container create 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:16:02 np0005603541 systemd[1]: Started libpod-conmon-441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c.scope.
Jan 31 02:16:02 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.23331565 +0000 UTC m=+0.023139934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.339244831 +0000 UTC m=+0.129069115 container init 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.3453657 +0000 UTC m=+0.135189954 container start 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.351237543 +0000 UTC m=+0.141061817 container attach 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:16:02 np0005603541 keen_pike[255748]: 167 167
Jan 31 02:16:02 np0005603541 systemd[1]: libpod-441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c.scope: Deactivated successfully.
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.353076808 +0000 UTC m=+0.142901062 container died 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:16:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:02 np0005603541 systemd[1]: var-lib-containers-storage-overlay-34b1037900278b116d335e833c225bdacabbdf42e0a726b56372fb79576caecb-merged.mount: Deactivated successfully.
Jan 31 02:16:02 np0005603541 podman[255732]: 2026-01-31 07:16:02.398259018 +0000 UTC m=+0.188083272 container remove 441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:16:02 np0005603541 systemd[1]: libpod-conmon-441acee710355bafce66d57624214c0991a149f4992da5982ce40d2162eca61c.scope: Deactivated successfully.
Jan 31 02:16:02 np0005603541 podman[255772]: 2026-01-31 07:16:02.529558627 +0000 UTC m=+0.044476165 container create 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:16:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:02 np0005603541 systemd[1]: Started libpod-conmon-751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69.scope.
Jan 31 02:16:02 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fa51fd6b2f69c75f9d2415d698ac231f5e9ce9999be108a693f2ed21629beb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fa51fd6b2f69c75f9d2415d698ac231f5e9ce9999be108a693f2ed21629beb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fa51fd6b2f69c75f9d2415d698ac231f5e9ce9999be108a693f2ed21629beb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:02 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fa51fd6b2f69c75f9d2415d698ac231f5e9ce9999be108a693f2ed21629beb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:02 np0005603541 podman[255772]: 2026-01-31 07:16:02.50750957 +0000 UTC m=+0.022427128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:02 np0005603541 podman[255772]: 2026-01-31 07:16:02.615533831 +0000 UTC m=+0.130451389 container init 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:16:02 np0005603541 podman[255772]: 2026-01-31 07:16:02.620319548 +0000 UTC m=+0.135237086 container start 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:16:02 np0005603541 podman[255772]: 2026-01-31 07:16:02.624162601 +0000 UTC m=+0.139080149 container attach 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:16:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]: {
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:    "0": [
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:        {
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "devices": [
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "/dev/loop3"
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            ],
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "lv_name": "ceph_lv0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "lv_size": "7511998464",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "lv_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "name": "ceph_lv0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "tags": {
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.block_uuid": "wZEPpX-bDpY-R2Wk-tA8R-VUWD-Mz03-ZcZU6j",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.cluster_name": "ceph",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.crush_device_class": "",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.encrypted": "0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.osd_fsid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.osd_id": "0",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.type": "block",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:                "ceph.vdo": "0"
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            },
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "type": "block",
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:            "vg_name": "ceph_vg0"
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:        }
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]:    ]
Jan 31 02:16:03 np0005603541 peaceful_wozniak[255788]: }
Jan 31 02:16:03 np0005603541 systemd[1]: libpod-751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69.scope: Deactivated successfully.
Jan 31 02:16:03 np0005603541 podman[255772]: 2026-01-31 07:16:03.40736691 +0000 UTC m=+0.922284448 container died 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:16:03 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:03 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:03 np0005603541 systemd[1]: var-lib-containers-storage-overlay-85fa51fd6b2f69c75f9d2415d698ac231f5e9ce9999be108a693f2ed21629beb-merged.mount: Deactivated successfully.
Jan 31 02:16:03 np0005603541 podman[255772]: 2026-01-31 07:16:03.457187603 +0000 UTC m=+0.972105141 container remove 751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:16:03 np0005603541 systemd[1]: libpod-conmon-751b32e697d38bbaa8ac452a19f7c7efb5b682ce2f1123859eeb4c8099d6cf69.scope: Deactivated successfully.
Jan 31 02:16:03 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:03 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:03 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.038259747 +0000 UTC m=+0.054355794 container create bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:16:04 np0005603541 systemd[1]: Started libpod-conmon-bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a.scope.
Jan 31 02:16:04 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.021169011 +0000 UTC m=+0.037265068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.126099637 +0000 UTC m=+0.142195684 container init bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.132376431 +0000 UTC m=+0.148472468 container start bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:16:04 np0005603541 crazy_kare[255968]: 167 167
Jan 31 02:16:04 np0005603541 systemd[1]: libpod-bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a.scope: Deactivated successfully.
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.136811019 +0000 UTC m=+0.152907076 container attach bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.137537166 +0000 UTC m=+0.153633233 container died bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:16:04 np0005603541 systemd[1]: var-lib-containers-storage-overlay-cf1c4b2017934fe6b2106d3d591b795f1363bad71bbe5931633371f1e80e4b70-merged.mount: Deactivated successfully.
Jan 31 02:16:04 np0005603541 podman[255952]: 2026-01-31 07:16:04.178024732 +0000 UTC m=+0.194120779 container remove bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:16:04 np0005603541 systemd[1]: libpod-conmon-bc5e58f7dd6347743d6caf9ce4106166ded277aacb0e0f82698352d3bc3ddd4a.scope: Deactivated successfully.
Jan 31 02:16:04 np0005603541 podman[255992]: 2026-01-31 07:16:04.323871735 +0000 UTC m=+0.045575041 container create 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:16:04 np0005603541 systemd[1]: Started libpod-conmon-44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d.scope.
Jan 31 02:16:04 np0005603541 systemd[1]: Started libcrun container.
Jan 31 02:16:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23d349bf6c72a37521f021639d4096856ba5e312ce890132b9ecbb1a09505fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23d349bf6c72a37521f021639d4096856ba5e312ce890132b9ecbb1a09505fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23d349bf6c72a37521f021639d4096856ba5e312ce890132b9ecbb1a09505fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:04 np0005603541 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b23d349bf6c72a37521f021639d4096856ba5e312ce890132b9ecbb1a09505fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:04 np0005603541 podman[255992]: 2026-01-31 07:16:04.308310846 +0000 UTC m=+0.030014162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:16:04 np0005603541 podman[255992]: 2026-01-31 07:16:04.425709056 +0000 UTC m=+0.147412372 container init 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:16:04 np0005603541 podman[255992]: 2026-01-31 07:16:04.431720162 +0000 UTC m=+0.153423458 container start 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:16:04 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:04 np0005603541 podman[255992]: 2026-01-31 07:16:04.436434157 +0000 UTC m=+0.158137453 container attach 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:16:04 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:04 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:04 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:04 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:05 np0005603541 magical_moore[256009]: {
Jan 31 02:16:05 np0005603541 magical_moore[256009]:    "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b": {
Jan 31 02:16:05 np0005603541 magical_moore[256009]:        "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 02:16:05 np0005603541 magical_moore[256009]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:16:05 np0005603541 magical_moore[256009]:        "osd_id": 0,
Jan 31 02:16:05 np0005603541 magical_moore[256009]:        "osd_uuid": "ca68ee2b-d68d-4c9f-93f3-a44e935f6e3b",
Jan 31 02:16:05 np0005603541 magical_moore[256009]:        "type": "bluestore"
Jan 31 02:16:05 np0005603541 magical_moore[256009]:    }
Jan 31 02:16:05 np0005603541 magical_moore[256009]: }
Jan 31 02:16:05 np0005603541 systemd[1]: libpod-44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d.scope: Deactivated successfully.
Jan 31 02:16:05 np0005603541 podman[255992]: 2026-01-31 07:16:05.201954115 +0000 UTC m=+0.923657411 container died 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:16:05 np0005603541 systemd[1]: var-lib-containers-storage-overlay-b23d349bf6c72a37521f021639d4096856ba5e312ce890132b9ecbb1a09505fd-merged.mount: Deactivated successfully.
Jan 31 02:16:05 np0005603541 podman[255992]: 2026-01-31 07:16:05.273878857 +0000 UTC m=+0.995582163 container remove 44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:16:05 np0005603541 systemd[1]: libpod-conmon-44b70bd8977bffccccaa4dd79600c2d3ddc128d3b63e8935c82a4ce624e32b2d.scope: Deactivated successfully.
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:16:05 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 4483348a-6a82-4e17-b66a-026155a5f7d6 does not exist
Jan 31 02:16:05 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 1908bb77-0a58-4e0c-bb9f-f18cfb72f87f does not exist
Jan 31 02:16:05 np0005603541 ceph-mgr[74648]: [progress WARNING root] complete: ev 17250b78-9d96-4494-baa7-a53731096c5a does not exist
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:05 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:16:05 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:05 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:05 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:06 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:06 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:06 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:06.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:06 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:06 np0005603541 ceph-mon[74355]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 02:16:06 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:07 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:07 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:07 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:07.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:07 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:07 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:07 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:08 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:08 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:08 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:08.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:08 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:08 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:08 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:09 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:09 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:09 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:09 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003727767890377815 of space, bias 1.0, pg target 0.11183303671133446 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:16:10 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:10 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:10 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:10 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:10 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:11 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:11 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:11 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.880570) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771880772, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1569, "num_deletes": 251, "total_data_size": 2163233, "memory_usage": 2204856, "flush_reason": "Manual Compaction"}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771902543, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2107029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26099, "largest_seqno": 27667, "table_properties": {"data_size": 2100265, "index_size": 3579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17404, "raw_average_key_size": 21, "raw_value_size": 2085362, "raw_average_value_size": 2527, "num_data_blocks": 157, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843662, "oldest_key_time": 1769843662, "file_creation_time": 1769843771, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22003 microseconds, and 8184 cpu microseconds.
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.902666) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2107029 bytes OK
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.902697) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907930) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907962) EVENT_LOG_v1 {"time_micros": 1769843771907952, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2156271, prev total WAL file size 2156271, number of live WAL files 2.
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.908957) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2057KB)], [59(6984KB)]
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771909000, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 9258673, "oldest_snapshot_seqno": -1}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6516 keys, 7630245 bytes, temperature: kUnknown
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771962077, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 7630245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7592012, "index_size": 20843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171250, "raw_average_key_size": 26, "raw_value_size": 7478306, "raw_average_value_size": 1147, "num_data_blocks": 817, "num_entries": 6516, "num_filter_entries": 6516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842016, "oldest_key_time": 0, "file_creation_time": 1769843771, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22587319-adf7-48dc-8223-5e2f596ebaec", "db_session_id": "F9FZJBU69XSJM19R5DYZ", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.962985) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7630245 bytes
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.966034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.0 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 7033, records dropped: 517 output_compression: NoCompression
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.966055) EVENT_LOG_v1 {"time_micros": 1769843771966044, "job": 32, "event": "compaction_finished", "compaction_time_micros": 53210, "compaction_time_cpu_micros": 28525, "output_level": 6, "num_output_files": 1, "total_output_size": 7630245, "num_input_records": 7033, "num_output_records": 6516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771966526, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771967699, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.908828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.967807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.967814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.967816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.967817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:11 np0005603541 ceph-mon[74355]: rocksdb: (Original Log Time 2026/01/31-07:16:11.967819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:16:12 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:12 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:12 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:12.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:12 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:12 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:12 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:12 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:12 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:13 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:13 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:13 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:13.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:13 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:14 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:14 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:14 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:14 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:14 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:15 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:15 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:15 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:15.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:16 np0005603541 podman[256100]: 2026-01-31 07:16:16.061436199 +0000 UTC m=+0.084431378 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 31 02:16:16 np0005603541 podman[256099]: 2026-01-31 07:16:16.141463478 +0000 UTC m=+0.173757693 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:16:16 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:16 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:16 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:16 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:16.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:16 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:17 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:17 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1519 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:17 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:17 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:17 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:17 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:18 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:18 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1519 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:18 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:18 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:18 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:18.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:18 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:19 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:19 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:19 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:19 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:16:20.146 158874 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:16:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:16:20.146 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:16:20 np0005603541 ovn_metadata_agent[158869]: 2026-01-31 07:16:20.146 158874 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:16:20 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:20 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:20 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:20 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:20 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:21 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:21 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:21 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:21 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:21.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:22 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:22 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:22 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:22.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:22 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:22 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1524 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:22 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:22 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:23 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:23 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1524 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:23 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:23 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:23 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:16:23 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:23.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:23 np0005603541 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:16:24 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:24 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:24 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:24.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:24 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:24 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:25 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:25 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:25 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:25 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:25 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:25.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:26 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:26 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:26 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:26.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:26 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:26 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:27 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:27 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:27 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:27 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:27 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:27.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:27 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:27 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:28 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:28 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:28 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:28.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:28 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:29 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:29 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:29 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:29 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:30 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:30 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:30 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:30 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:30 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:31 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:31 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:31 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:31 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:32 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:32 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:32 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:32 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:32.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:32 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:32 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:32 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:33 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:33 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:33 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:33 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:33 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:34 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:34 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:34 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:34.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:34 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:34 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:35 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:35 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:35 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:35 np0005603541 systemd-logind[817]: New session 52 of user zuul.
Jan 31 02:16:35 np0005603541 systemd[1]: Started Session 52 of User zuul.
Jan 31 02:16:36 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:36 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:36 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:36 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:36 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:37 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:37 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:37 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:37 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:37 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:16:37 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:16:38 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24697 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:38 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:38 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:38 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:38 np0005603541 nova_compute[245601]: 2026-01-31 07:16:38.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:16:38 np0005603541 nova_compute[245601]: 2026-01-31 07:16:38.627 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:16:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:38 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:38 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:38 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14913 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:38 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24680 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:38 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:39 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14919 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:39 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 02:16:39 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234553437' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 02:16:39 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:39 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:39 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:39.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:39 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:40 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24724 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:40 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:40 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:40 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:40 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24730 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:40 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:41 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:41 np0005603541 nova_compute[245601]: 2026-01-31 07:16:41.627 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:16:41 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:41 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:41 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:42 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:42 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:42 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:42 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.626 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.626 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.628 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.669 245605 DEBUG nova.compute.manager [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.669 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:42 np0005603541 nova_compute[245601]: 2026-01-31 07:16:42.670 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:42 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:42 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:42 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:43 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:43 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:43 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:43 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:43 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:44 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:44 np0005603541 ovs-vsctl[256590]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 02:16:44 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:44 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:44 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:44.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:44 np0005603541 nova_compute[245601]: 2026-01-31 07:16:44.665 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:44 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:45 np0005603541 virtqemud[245931]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 02:16:45 np0005603541 virtqemud[245931]: hostname: compute-0
Jan 31 02:16:45 np0005603541 virtqemud[245931]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 02:16:45 np0005603541 virtqemud[245931]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 02:16:45 np0005603541 virtqemud[245931]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 02:16:45 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:45 np0005603541 nova_compute[245601]: 2026-01-31 07:16:45.625 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:45 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:45 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:45 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:45 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: cache status {prefix=cache status} (starting...)
Jan 31 02:16:45 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:45 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: client ls {prefix=client ls} (starting...)
Jan 31 02:16:45 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:45 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24701 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.050 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.050 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.050 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.051 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.051 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:16:46 np0005603541 podman[256914]: 2026-01-31 07:16:46.256755769 +0000 UTC m=+0.096772518 container health_status ef25073dd3088188d836f657d863cba26de5128ab18b53a720dbff74066c1e94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 02:16:46 np0005603541 lvm[256991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:16:46 np0005603541 lvm[256991]: VG ceph_vg0 finished
Jan 31 02:16:46 np0005603541 podman[256918]: 2026-01-31 07:16:46.307738701 +0000 UTC m=+0.130614702 container health_status 55b3c96d172ae2621c27cb370d5834953e7bfd07a38e86bb9c8a9992e1ea3cfe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447-93394a7ee7a783f28b0f71ff48cf372f36b44b5814bc3b46c334ecc1ba2ed447'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 02:16:46 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 02:16:46 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 02:16:46 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24716 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:46 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.548 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:16:46 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:46 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:46 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:46.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:46 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14934 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.717 245605 WARNING nova.virt.libvirt.driver [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.718 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5151MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.718 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:16:46 np0005603541 nova_compute[245601]: 2026-01-31 07:16:46.718 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 02:16:46 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:46 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012174330' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 02:16:47 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14943 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.104 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.104 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.127 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:47 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24743 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:47 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:47 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:47.294+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486489742' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379256453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.584 245605 DEBUG oslo_concurrency.processutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.592 245605 DEBUG nova.compute.provider_tree [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 7666a20e-f730-4016-ad1a-a5df3a106dcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.642 245605 DEBUG nova.scheduler.client.report [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Inventory has not changed for provider 7666a20e-f730-4016-ad1a-a5df3a106dcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.644 245605 DEBUG nova.compute.resource_tracker [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:16:47 np0005603541 nova_compute[245601]: 2026-01-31 07:16:47.644 245605 DEBUG oslo_concurrency.lockutils [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:47 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:47 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:47 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:47 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.14973 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:47 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:47 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:47.788+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6701442' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:47 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: ops {prefix=ops} (starting...)
Jan 31 02:16:47 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24784 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24773 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962392147' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218391499' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24799 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:48 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:48 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:48 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:48.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24788 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: session ls {prefix=session ls} (starting...)
Jan 31 02:16:48 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes Can't run that command on an inactive MDS!
Jan 31 02:16:48 np0005603541 nova_compute[245601]: 2026-01-31 07:16:48.644 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:48 np0005603541 nova_compute[245601]: 2026-01-31 07:16:48.645 245605 DEBUG oslo_service.periodic_task [None req-e03475ef-e782-4f4a-b8c2-f3e6ead1e2f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mds[93426]: mds.cephfs.compute-0.kanoes asok_command: status {prefix=status} (starting...)
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032169188' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:48 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:48 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15024 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Optimize plan auto_2026-01-31_07:16:49
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] do_upmap
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24832 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:49 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:49.280+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871985638' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954332218' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334420105' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 02:16:49 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:49 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:49 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24845 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:49 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:49 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:49.961+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:49 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772346024' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24859 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343418582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24871 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15072 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:50 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:50.570+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:50 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:50 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:50 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:50.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 02:16:50 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275701586' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24887 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:50 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314684662' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246216183' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24905 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24920 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:51 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:51 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:51 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24940 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:51 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:51.995+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:51 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] exit Reset 0.000337 1 0.000452
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62 pruub=15.869207382s) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY pruub 152.346649170s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.036592 5 0.000414
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.036279 5 0.000739
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.036262 5 0.000311
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.036743 5 0.000657
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.090488 5 0.000126
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.163603 4 0.000063
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001112 1 0.000082
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.050562 2 0.000067
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.215259 4 0.000056
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001152 1 0.000034
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052636 2 0.000101
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.269105 4 0.000057
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000888 1 0.000038
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.093897 2 0.000079
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.364013 4 0.000047
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.003231 1 0.000068
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 62 ms_handle_reset con 0x55be68510c00 session 0x55be68b450e0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.037372 2 0.000059
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.404758 4 0.000044
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000809 1 0.000052
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.089422 2 0.000066
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.338104 1 0.000060
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000715 1 0.000116
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.074086 2 0.000111
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.413075 1 0.000081
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000606 1 0.000201
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.087067 2 0.000220
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.501002 1 0.000222
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000838 1 0.000078
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.546848 1 0.000026
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044995 2 0.000216
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000563 1 0.000133
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.036974 2 0.000068
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 62 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 62 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 62 handle_osd_map epochs [62,63], i have 63, src has [1,63]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 62 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.037594 1 0.000111
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active 1.324263 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary 2.643944 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.602447 1 0.000376
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started 2.643972 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] exit Started/Primary/Active 1.186060 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] exit Started/Primary 1.324151 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] exit Started 1.324191 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'449 lcod 54'448 mlcod 54'448 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692562103s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 152.356018066s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849905014s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 54'448 active pruub 152.513366699s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.143467 1 0.000115
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active 1.325018 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary 2.644500 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started 2.644550 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] exit Reset 0.000131 1 0.000191
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] exit Reset 0.000111 1 0.000171
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692481995s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.356018066s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849831581s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY pruub 152.513366699s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692319870s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 152.355911255s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] exit Reset 0.000154 1 0.000246
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.186203 6 0.000192
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.736939 1 0.000232
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] exit Started/Primary/Active 1.186810 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] exit Started/Primary 1.323939 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] exit Started 1.324102 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'453 lcod 54'452 mlcod 54'452 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.091978 1 0.000096
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] exit Started/Primary/Active 1.325102 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] exit Started/Primary 2.644627 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849797249s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 54'452 active pruub 152.513580322s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] exit Started 2.644664 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=53'443 lcod 53'442 mlcod 53'442 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692113876s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 53'442 active pruub 152.355926514s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.648976 1 0.000286
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] exit Started/Primary/Active 1.186838 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] exit Started/Primary 1.324843 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] exit Started 1.324871 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] exit Reset 0.000084 1 0.000110
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'465 lcod 54'464 mlcod 54'464 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] exit Reset 0.000048 1 0.000069
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849735260s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY pruub 152.513580322s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692079544s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY pruub 152.355926514s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849512100s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 54'464 active pruub 152.513397217s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.812088 1 0.000170
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active 1.325020 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 54'445 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.235731 4 0.000163
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 54'445 active+remapped mbc={255={}}] exit Started/Primary/Active 1.324757 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 54'445 active+remapped mbc={255={}}] exit Started/Primary 2.644433 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 54'445 active+remapped mbc={255={}}] exit Started 2.644467 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'446 lcod 54'445 mlcod 54'445 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.943310 1 0.000088
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active 1.324269 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary 2.644918 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started 2.644983 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] exit Reset 0.000169 1 0.000211
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691993713s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 152.356033325s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691857338s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 54'445 active pruub 152.355850220s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849368095s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY pruub 152.513397217s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary 2.644489 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] exit Reset 0.000085 1 0.000184
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] exit Reset 0.000142 1 0.000163
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started 2.644665 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] exit Start 0.000033 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691733360s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY pruub 152.355850220s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691860199s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 152.356033325s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.902731 1 0.000144
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] exit Started/Primary/Active 1.324442 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] exit Started/Primary 2.644426 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.565771 1 0.000150
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary/Active 1.187220 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] exit Started 2.644566 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[51,60)/1 crt=54'442 lcod 54'441 mlcod 54'441 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started/Primary 1.324157 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] exit Started 1.324186 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] exit Reset 0.000117 1 0.000284
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[51,61)/1 crt=54'444 lcod 54'443 mlcod 54'443 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849142075s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 54'443 active pruub 152.513412476s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691866875s) [2] async=[2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 54'441 active pruub 152.356140137s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691775322s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] exit Reset 0.000048 1 0.000082
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] exit Reset 0.000072 1 0.000125
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63 pruub=14.849112511s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY pruub 152.513412476s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691829681s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY pruub 152.356140137s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] exit Start 0.000036 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.691934586s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 152.356033325s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] exit Start 0.000014 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63 pruub=14.692227364s) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 152.355911255s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001499 2 0.000027
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] lb MIN local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.296943 2 0.000191
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] lb MIN local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.298507 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 63 pg[9.1b( v 53'438 (0'0,53'438] lb MIN local-lis/les=60/61 n=3 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=62) [2] r=-1 lpr=62 pi=[51,62)/1 crt=53'438 lcod 53'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.484782 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 63 heartbeat osd_stat(store_statfs(0x1bcb4d000/0x0/0x1bfc00000, data 0x54095/0xd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 909312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.965947 6 0.000135
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.966204 6 0.000109
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.965944 6 0.000066
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001048 2 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001461 2 0.000079
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001801 2 0.000153
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.970990 7 0.000096
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.970853 7 0.000066
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.971311 7 0.000088
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000041 1 0.000034
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.970595 7 0.000971
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.970806 7 0.000425
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.971709 7 0.000112
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.971400 7 0.000097
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000100 1 0.000042
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000135 1 0.000015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.971864 7 0.000095
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000149 1 0.000055
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000254 1 0.000132
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000269 1 0.000063
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000409 1 0.000217
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000347 1 0.000047
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] lb MIN local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.173500 2 0.000244
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] lb MIN local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174614 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.173112 2 0.000108
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.d( v 54'465 (0'0,54'465] lb MIN local-lis/les=61/62 n=9 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'465 lcod 54'464 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.140880 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174621 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.15( v 54'444 (0'0,54'444] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.140624 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234441 2 0.000162
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.236336 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.3( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.202378 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] lb MIN local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.269627 2 0.000225
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] lb MIN local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269734 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.b( v 54'446 (0'0,54'446] lb MIN local-lis/les=60/61 n=5 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'446 lcod 54'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.240789 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.291960 2 0.000165
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.292111 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.17( v 54'442 (0'0,54'442] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'442 lcod 54'441 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.263005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] lb MIN local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.427429 2 0.000162
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] lb MIN local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.427622 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.f( v 54'463 (0'0,54'463] lb MIN local-lis/les=60/61 n=7 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.398765 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.427426 2 0.000128
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.427618 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.13( v 54'444 (0'0,54'444] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'444 lcod 54'443 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.399399 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] lb MIN local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.427514 2 0.000210
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] lb MIN local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.427846 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.5( v 54'453 (0'0,54'453] lb MIN local-lis/les=61/62 n=7 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'453 lcod 54'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.399234 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 64 ms_handle_reset con 0x55be67ed1c00 session 0x55be68b19680
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.485866 2 0.000122
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.486187 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.7( v 53'443 (0'0,53'443] lb MIN local-lis/les=60/61 n=4 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=53'443 lcod 53'442 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.457653 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.533256 2 0.000180
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.533690 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=61/62 n=4 ec=51/44 lis/c=61/51 les/c/f=62/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'449 lcod 54'448 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.505605 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.533477 2 0.000210
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.533968 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 64 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=60/61 n=6 ec=51/44 lis/c=60/51 les/c/f=61/52/0 sis=63) [2] r=-1 lpr=63 pi=[51,63)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.505532 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 1916928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687413 data_alloc: 218103808 data_used: 180224
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1875968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.173743248s of 10.812228203s, submitted: 139
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1638400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 64 heartbeat osd_stat(store_statfs(0x1bcb59000/0x0/0x1bfc00000, data 0x55733/0xc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 64 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 64 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'438 lcod 52'437 mlcod 52'437 active+clean] exit Started/Primary/Active/Clean 22.085181 39 0.000159
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'438 lcod 52'437 mlcod 52'437 active mbc={}] exit Started/Primary/Active 22.099466 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'438 lcod 52'437 mlcod 52'437 active mbc={}] exit Started/Primary 22.099726 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'438 lcod 52'437 mlcod 52'437 active mbc={}] exit Started 22.099767 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'438 lcod 52'437 mlcod 52'437 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.914024353s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 52'437 active pruub 154.341018677s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] exit Reset 0.000198 1 0.000297
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] exit Start 0.000020 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913900375s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 154.341018677s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'450 lcod 54'449 mlcod 54'449 active+clean] exit Started/Primary/Active/Clean 22.085309 39 0.000142
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'450 lcod 54'449 mlcod 54'449 active mbc={}] exit Started/Primary/Active 22.099307 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'450 lcod 54'449 mlcod 54'449 active mbc={}] exit Started/Primary 22.099365 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'450 lcod 54'449 mlcod 54'449 active mbc={}] exit Started 22.099407 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'450 lcod 54'449 mlcod 54'449 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913933754s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 54'449 active pruub 154.341217041s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] exit Reset 0.000128 1 0.000184
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913849831s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 154.341217041s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'453 lcod 53'452 mlcod 53'452 active+clean] exit Started/Primary/Active/Clean 22.085557 39 0.000177
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'453 lcod 53'452 mlcod 53'452 active mbc={}] exit Started/Primary/Active 22.097519 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'453 lcod 53'452 mlcod 53'452 active mbc={}] exit Started/Primary 22.098137 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'453 lcod 53'452 mlcod 53'452 active mbc={}] exit Started 22.098288 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'453 lcod 53'452 mlcod 53'452 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913835526s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 53'452 active pruub 154.341857910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] exit Reset 0.000086 1 0.000153
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.913772583s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 154.341857910s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active+clean] exit Started/Primary/Active/Clean 22.078811 39 0.000133
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started/Primary/Active 22.096034 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started/Primary 22.096856 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started 22.096894 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921279907s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 154.349487305s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] exit Reset 0.000073 1 0.000113
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 65 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65 pruub=9.921225548s) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 154.349487305s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.672778 3 0.000101
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.672860 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] exit Reset 0.000140 1 0.000194
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.672753 3 0.000067
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.672813 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] exit Reset 0.000132 1 0.000178
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] exit Start 0.000018 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] exit Start 0.000570 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.673218 3 0.000061
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.673262 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] exit Reset 0.000045 1 0.000069
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.673366 3 0.000079
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.673400 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=-1 lpr=65 pi=[51,65)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] exit Reset 0.000060 1 0.000078
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.007061 2 0.000678
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000052 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.007540 2 0.000077
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.006238 2 0.000035
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.006628 2 0.000114
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000120 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000059 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 66 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695621 data_alloc: 218103808 data_used: 188416
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 66 handle_osd_map epochs [66,67], i have 67, src has [1,67]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.971201 3 0.000152
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.978412 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.971090 3 0.000309
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.978908 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.971547 3 0.000174
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.978335 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.971891 3 0.000080
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.978223 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.360961 5 0.000384
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.360585 5 0.000386
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.361144 5 0.000418
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000105 1 0.000058
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.360479 5 0.000535
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000651 1 0.000025
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044426 2 0.000058
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.045240 1 0.000037
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000920 1 0.000037
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061722 2 0.000107
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.107990 1 0.000084
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001085 1 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.076678 2 0.000091
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.185759 1 0.000031
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000817 1 0.000040
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.053897 2 0.000087
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.495981 1 0.000091
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] exit Started/Primary/Active 1.043172 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] exit Started/Primary 2.021609 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] exit Started 2.022235 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=52'438 lcod 52'437 mlcod 52'437 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317691803s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 52'437 active pruub 162.440124512s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.636832 1 0.000132
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active 1.043223 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary 2.022152 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started 2.022192 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'450 lcod 54'449 mlcod 54'449 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317581177s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 54'449 active pruub 162.440155029s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] exit Reset 0.000069 1 0.000101
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317535400s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY pruub 162.440155029s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] exit Reset 0.000248 1 0.000273
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317538261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY pruub 162.440124512s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.574641 1 0.000130
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] exit Started/Primary/Active 1.043340 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] exit Started/Primary 2.021693 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] exit Started 2.021712 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=53'453 lcod 53'452 mlcod 53'452 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.317067146s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 53'452 active pruub 162.440216064s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.442006 1 0.000144
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active 1.043211 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary 2.021464 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started 2.021484 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] async=[1] r=0 lpr=66 pi=[51,66)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316988945s) [1] async=[1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 162.440231323s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] exit Reset 0.000153 1 0.000188
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316941261s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY pruub 162.440216064s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] exit Reset 0.000070 1 0.000104
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68 pruub=15.316942215s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.440231323s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.379738 6 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.380450 6 0.000111
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.379769 6 0.000133
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.380399 6 0.000269
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000630 2 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001013 2 0.000123
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001097 2 0.000087
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000468 2 0.000794
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1646592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] lb MIN local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 DELETING pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075867 2 0.000213
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] lb MIN local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076554 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] lb MIN local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'450 lcod 54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.457053 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 DELETING pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112561 2 0.000127
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.113694 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.493495 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 DELETING pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.142056 2 0.000199
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.143223 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=52'438 lcod 52'437 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.523670 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 DELETING pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.186515 2 0.000212
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.187759 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] lb MIN local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=-1 lpr=68 pi=[51,68)/1 crt=53'453 lcod 53'452 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.567619 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 69 heartbeat osd_stat(store_statfs(0x1bcb4b000/0x0/0x1bfc00000, data 0x5cc13/0xd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667780 data_alloc: 218103808 data_used: 180224
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 69 heartbeat osd_stat(store_statfs(0x1bcb4c000/0x0/0x1bfc00000, data 0x5e768/0xd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 69 heartbeat osd_stat(store_statfs(0x1bcb4c000/0x0/0x1bfc00000, data 0x5e768/0xd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active+clean] exit Started/Primary/Active/Clean 30.023444 55 0.000202
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started/Primary/Active 30.037126 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started/Primary 30.037216 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] exit Started 30.037265 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'458 lcod 54'457 mlcod 54'457 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975888252s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 162.341400146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] exit Reset 0.000131 1 0.000191
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] exit Start 0.000020 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975801468s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 162.341400146s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'440 lcod 52'439 mlcod 52'439 active+clean] exit Started/Primary/Active/Clean 30.023864 55 0.000195
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'440 lcod 52'439 mlcod 52'439 active mbc={}] exit Started/Primary/Active 30.035483 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'440 lcod 52'439 mlcod 52'439 active mbc={}] exit Started/Primary 30.035591 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'440 lcod 52'439 mlcod 52'439 active mbc={}] exit Started 30.035617 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'440 lcod 52'439 mlcod 52'439 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975490570s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 52'439 active pruub 162.341995239s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] exit Reset 0.000125 1 0.000189
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 70 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70 pruub=9.975412369s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 162.341995239s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1605632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431603432s of 10.969909668s, submitted: 64
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.291778 3 0.000088
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.291824 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.292772 3 0.000075
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.292835 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=70) [2] r=-1 lpr=70 pi=[51,70)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] exit Reset 0.000129 1 0.000160
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] exit Reset 0.000114 1 0.000151
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000050
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000537
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 71 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 71 heartbeat osd_stat(store_statfs(0x1bcb48000/0x0/0x1bfc00000, data 0x604e3/0xd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'445 lcod 52'444 mlcod 52'444 active+clean] exit Started/Primary/Active/Clean 32.048891 62 0.000330
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'445 lcod 52'444 mlcod 52'444 active mbc={}] exit Started/Primary/Active 32.062809 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.731728 4 0.000072
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'445 lcod 52'444 mlcod 52'444 active mbc={}] exit Started/Primary 32.062892 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.731840 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=51/52 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'445 lcod 52'444 mlcod 52'444 active mbc={}] exit Started 32.062949 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'445 lcod 52'444 mlcod 52'444 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 72 handle_osd_map epochs [71,72], i have 72, src has [1,72]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950375557s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 52'444 active pruub 170.341415405s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] exit Reset 0.000135 1 0.000248
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] exit Start 0.000014 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.950277328s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 170.341415405s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.732713 4 0.000077
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.732826 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'463 lcod 54'462 mlcod 54'462 active+clean] exit Started/Primary/Active/Clean 32.042395 62 0.000202
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'463 lcod 54'462 mlcod 54'462 active mbc={}] exit Started/Primary/Active 32.060208 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'463 lcod 54'462 mlcod 54'462 active mbc={}] exit Started/Primary 32.060375 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'463 lcod 54'462 mlcod 54'462 active mbc={}] exit Started 32.060430 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'463 lcod 54'462 mlcod 54'462 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957394600s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 170.348876953s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] exit Reset 0.000063 1 0.000103
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72 pruub=15.957352638s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 170.348876953s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1540096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 72 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.206990 5 0.000450
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000144 1 0.000213
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.206882 5 0.000475
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000735 1 0.000040
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.064312 2 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.065058 1 0.000122
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001048 1 0.000033
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.283254 2 0.000106
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 72 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013043 3 0.000084
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013096 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.740805 1 0.000164
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013405 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started/Primary 1.745270 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] exit Started 1.745783 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=54'458 lcod 54'457 mlcod 54'457 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] exit Reset 0.000112 1 0.000149
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193369865s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 54'457 active pruub 170.597778320s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] exit Reset 0.000126 1 0.000203
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.193284035s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY pruub 170.597778320s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'466 lcod 54'465 mlcod 54'465 active+clean] exit Started/Primary/Active/Clean 33.062457 65 0.000231
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'466 lcod 54'465 mlcod 54'465 active mbc={}] exit Started/Primary/Active 33.075279 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'466 lcod 54'465 mlcod 54'465 active mbc={}] exit Started/Primary 33.075362 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'466 lcod 54'465 mlcod 54'465 active mbc={}] exit Started 33.075423 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'466 lcod 54'465 mlcod 54'465 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936949730s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 54'465 active pruub 170.341690063s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] exit Reset 0.000127 1 0.000184
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936874390s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 170.341690063s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'445 lcod 53'444 mlcod 53'444 active+clean] exit Started/Primary/Active/Clean 33.062459 65 0.000202
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'445 lcod 53'444 mlcod 53'444 active mbc={}] exit Started/Primary/Active 33.074688 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'445 lcod 53'444 mlcod 53'444 active mbc={}] exit Started/Primary 33.074750 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'445 lcod 53'444 mlcod 53'444 active mbc={}] exit Started 33.074790 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'445 lcod 53'444 mlcod 53'444 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.457206 1 0.000165
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936934471s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 53'444 active pruub 170.342041016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013705 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] exit Started/Primary 1.746551 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] exit Started 1.746579 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013586 3 0.000081
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013620 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[51,71)/1 crt=52'440 lcod 52'439 mlcod 52'439 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=72) [2] r=-1 lpr=72 pi=[51,72)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192858696s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 52'439 active pruub 170.598068237s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] exit Reset 0.000134 1 0.000173
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] exit Reset 0.000243 1 0.000303
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73 pruub=15.192792892s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY pruub 170.598068237s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] exit Reset 0.000185 1 0.000205
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73 pruub=14.936720848s) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 170.342041016s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] exit Start 0.000098 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002105 2 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 73 handle_osd_map epochs [72,73], i have 73, src has [1,73]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002130 2 0.000133
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 73 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683642 data_alloc: 218103808 data_used: 200704
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1449984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 73 heartbeat osd_stat(store_statfs(0x1bcb3f000/0x0/0x1bfc00000, data 0x65d46/0xde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 73 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002366 3 0.000113
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004610 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004579 3 0.000078
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.004634 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] exit Reset 0.000088 1 0.000117
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004480 3 0.000054
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.004511 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=-1 lpr=73 pi=[51,73)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] exit Reset 0.000036 1 0.000049
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003114 3 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005364 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=51/52 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.012103 2 0.000049
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.011856 2 0.000035
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000104 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000060 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.027585 5 0.000488
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000123 1 0.000115
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.029598 5 0.000632
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000522 1 0.000044
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036068 7 0.000118
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037801 7 0.000274
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061356 2 0.000068
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.061811 1 0.000151
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000927 1 0.000155
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.047285 2 0.000115
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.108444 1 0.000066
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.106023 1 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1359872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] lb MIN local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 DELETING pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067239 2 0.000269
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] lb MIN local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.175747 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.8( v 54'458 (0'0,54'458] lb MIN local-lis/les=71/72 n=8 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=54'458 lcod 54'457 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.211880 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] lb MIN local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 DELETING pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096378 2 0.000226
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] lb MIN local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.202481 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15135 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 74 pg[9.18( v 52'440 (0'0,52'440] lb MIN local-lis/les=71/72 n=3 ec=51/44 lis/c=71/51 les/c/f=72/52/0 sis=73) [2] r=-1 lpr=73 pi=[51,73)/1 crt=52'440 lcod 52'439 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.240375 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.103706 3 0.000151
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.115728 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.103734 3 0.000250
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.116078 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=51/52 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=51/52 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.977009 1 0.000256
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] exit Started/Primary/Active 1.117142 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] exit Started/Primary 2.121828 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] exit Started 2.121885 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=52'445 lcod 52'444 mlcod 52'444 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911890984s) [2] async=[2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 52'444 active pruub 172.438339233s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.025970 1 0.000188
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary/Active 1.115901 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started/Primary 2.121299 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] exit Started 2.121416 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[51,73)/1 crt=54'463 lcod 54'462 mlcod 54'462 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911568642s) [2] async=[2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 54'462 active pruub 172.438385010s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 75 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] exit Reset 0.000146 1 0.000213
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911472321s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY pruub 172.438385010s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] exit Reset 0.000849 1 0.001022
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] exit Start 0.000092 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75 pruub=14.911182404s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY pruub 172.438339233s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.010424 5 0.000479
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000154 1 0.000075
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000751 1 0.000122
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.018349 5 0.000440
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1335296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.037832 2 0.000049
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.030918 1 0.000081
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001015 1 0.000132
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.088417 2 0.000123
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.963339 1 0.000159
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] exit Started/Primary/Active 1.102585 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] exit Started/Primary 2.218685 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] exit Started 2.218722 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=54'466 lcod 54'465 mlcod 54'465 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.053184 1 0.000148
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] exit Started/Primary/Active 1.102724 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] exit Started/Primary 2.218482 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915776253s) [1] async=[1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 54'465 active pruub 173.544143677s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] exit Started 2.218505 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[51,74)/1 crt=53'445 lcod 53'444 mlcod 53'444 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907569885s) [1] async=[1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 53'444 active pruub 173.536026001s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] exit Reset 0.000105 1 0.000152
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.907505035s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY pruub 173.536026001s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] exit Reset 0.000255 1 0.000379
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76 pruub=14.915587425s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY pruub 173.544143677s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.112473 7 0.000355
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000121 1 0.000063
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.116054 7 0.000196
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000127 1 0.000068
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] lb MIN local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.065950 2 0.000387
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] lb MIN local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066261 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.9( v 52'445 (0'0,52'445] lb MIN local-lis/les=73/74 n=5 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=52'445 lcod 52'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.178899 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] lb MIN local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.114653 2 0.000266
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] lb MIN local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.114849 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 76 pg[9.19( v 54'463 (0'0,54'463] lb MIN local-lis/les=73/74 n=7 ec=51/44 lis/c=73/51 les/c/f=74/52/0 sis=75) [2] r=-1 lpr=75 pi=[51,75)/1 crt=54'463 lcod 54'462 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.230955 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1269760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 76 heartbeat osd_stat(store_statfs(0x1bcb3b000/0x0/0x1bfc00000, data 0x6b0e7/0xe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 76 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.704105 6 0.000108
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.704212 6 0.000333
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000768 1 0.000082
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000991 2 0.000051
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 DELETING pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045874 3 0.000221
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046687 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=53'445 lcod 53'444 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.750848 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] lb MIN local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 DELETING pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112229 2 0.000222
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] lb MIN local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.113271 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] lb MIN local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=-1 lpr=76 pi=[51,76)/1 crt=54'466 lcod 54'465 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.817545 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637481 data_alloc: 218103808 data_used: 180224
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 78 heartbeat osd_stat(store_statfs(0x1bcb37000/0x0/0x1bfc00000, data 0x6e9b7/0xe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.404607773s of 10.820051193s, submitted: 111
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 1294336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 78 heartbeat osd_stat(store_statfs(0x1bcb38000/0x0/0x1bfc00000, data 0x6e9b7/0xe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 642665 data_alloc: 218103808 data_used: 188416
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 1261568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1253376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 1245184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1236992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 80 handle_osd_map epochs [81,82], i have 80, src has [1,82]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1228800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 657625 data_alloc: 218103808 data_used: 196608
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1187840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 83 heartbeat osd_stat(store_statfs(0x1bcb29000/0x0/0x1bfc00000, data 0x75ef7/0xf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1138688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 1130496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 84 heartbeat osd_stat(store_statfs(0x1bcb24000/0x0/0x1bfc00000, data 0x798d5/0xf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 84 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'436 lcod 52'435 mlcod 52'435 active+clean] exit Started/Primary/Active/Clean 50.846342 101 0.000793
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'436 lcod 52'435 mlcod 52'435 active mbc={}] exit Started/Primary/Active 50.852843 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'436 lcod 52'435 mlcod 52'435 active mbc={}] exit Started/Primary 50.853006 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'436 lcod 52'435 mlcod 52'435 active mbc={}] exit Started 50.853227 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=52'436 lcod 52'435 mlcod 52'435 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153639793s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 52'435 active pruub 186.333679199s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] exit Reset 0.000181 1 0.001543
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] exit Start 0.000383 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 85 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85 pruub=13.153533936s) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 186.333679199s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1097728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.749959 3 0.000520
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.750445 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=-1 lpr=85 pi=[51,85)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] exit Reset 0.000161 1 0.000295
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.011396 2 0.000076
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000074 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 86 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1089536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.130466461s of 10.475877762s, submitted: 30
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011811 3 0.000178
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.023384 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=51/52 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'454 lcod 54'453 mlcod 54'453 active+clean] exit Started/Primary/Active/Clean 52.613018 107 0.000337
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'454 lcod 54'453 mlcod 54'453 active mbc={}] exit Started/Primary/Active 52.627047 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'454 lcod 54'453 mlcod 54'453 active mbc={}] exit Started/Primary 52.627204 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'454 lcod 54'453 mlcod 54'453 active mbc={}] exit Started 52.627260 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=54'454 lcod 54'453 mlcod 54'453 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387310028s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 186.341903687s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] exit Reset 0.000156 1 0.000246
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] exit Start 0.000025 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87 pruub=11.387218475s) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 186.341903687s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 87 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.028043 5 0.000299
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000113 1 0.000121
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000722 1 0.000087
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.026358 2 0.000072
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 87 heartbeat osd_stat(store_statfs(0x1bcb1e000/0x0/0x1bfc00000, data 0x7d359/0xfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675721 data_alloc: 218103808 data_used: 204800
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1048576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 87 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.080046 3 0.000103
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.025165 1 0.000116
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] exit Started/Primary/Active 1.080741 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] exit Started/Primary 2.104170 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.080392 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=-1 lpr=87 pi=[51,87)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] exit Started 2.104417 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[51,86)/1 crt=52'436 lcod 52'435 mlcod 52'435 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946665764s) [1] async=[1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 52'435 active pruub 190.982055664s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] exit Reset 0.000191 1 0.000628
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] exit Start 0.000160 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88 pruub=14.946530342s) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY pruub 190.982055664s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] exit Reset 0.000739 1 0.001080
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] exit Start 0.000042 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000154
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000083 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000090 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 88 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1064960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973778 4 0.000355
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.974219 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=51/52 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'447 lcod 53'446 mlcod 53'446 active+clean] exit Started/Primary/Active/Clean 54.661522 113 0.000316
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'447 lcod 53'446 mlcod 53'446 active mbc={}] exit Started/Primary/Active 54.678789 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'447 lcod 53'446 mlcod 53'446 active mbc={}] exit Started/Primary 54.678845 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'447 lcod 53'446 mlcod 53'446 active mbc={}] exit Started 54.678900 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=51) [0] r=0 lpr=51 crt=53'447 lcod 53'446 mlcod 53'446 active mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338972092s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 53'446 active pruub 186.349975586s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] exit Reset 0.000359 1 0.000303
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] exit Start 0.000039 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89 pruub=9.338845253s) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 186.349975586s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.011037 5 0.000308
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000139 1 0.000071
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001331 1 0.000090
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.994542 7 0.000292
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043921 2 0.000088
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.036543 1 0.000046
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] lb MIN local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 DELETING pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022358 2 0.000221
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] lb MIN local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.058966 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] lb MIN local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=-1 lpr=88 pi=[51,88)/1 crt=52'436 lcod 52'435 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.053721 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.162486 3 0.000319
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.107097 1 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.162591 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary/Active 1.163796 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=-1 lpr=89 pi=[51,89)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started/Primary 2.138051 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] exit Started 2.138131 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[51,88)/1 crt=54'454 lcod 54'453 mlcod 54'453 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.847044945s) [1] async=[1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 54'453 active pruub 193.021102905s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] exit Reset 0.000118 1 0.000189
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] exit Reset 0.000141 1 0.000229
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90 pruub=14.846958160s) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY pruub 193.021102905s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000067
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 90 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 90 heartbeat osd_stat(store_statfs(0x1bcb11000/0x0/0x1bfc00000, data 0x847cb/0x10a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 90 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990892 4 0.000099
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.991068 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=51/52 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.007906 5 0.000527
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000146 1 0.000117
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000788 1 0.000153
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003288 7 0.000179
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039216 2 0.000130
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.036400 1 0.000128
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] lb MIN local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 DELETING pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044508 2 0.000259
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] lb MIN local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.080979 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] lb MIN local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=-1 lpr=90 pi=[51,90)/1 crt=54'454 lcod 54'453 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.084391 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.956855 1 0.000201
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005441 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] exit Started/Primary 1.996539 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] exit Started 1.996573 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[51,90)/1 crt=53'447 lcod 53'446 mlcod 53'446 active+remapped mbc={255={}}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002256393s) [1] async=[1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 53'446 active pruub 195.172988892s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] exit Reset 0.000138 1 0.000204
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] exit Start 0.000467 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92 pruub=15.002179146s) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY pruub 195.172988892s@ mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677439 data_alloc: 218103808 data_used: 200704
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022788 7 0.000599
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000127 1 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] lb MIN local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 DELETING pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033948 2 0.000296
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] lb MIN local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034162 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] lb MIN local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=-1 lpr=92 pi=[51,92)/1 crt=53'447 lcod 53'446 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.057494 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 93 heartbeat osd_stat(store_statfs(0x1bcb0a000/0x0/0x1bfc00000, data 0x89e5c/0x111000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 94 heartbeat osd_stat(store_statfs(0x1bcb0d000/0x0/0x1bfc00000, data 0x89e5c/0x111000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.657588005s of 10.085265160s, submitted: 50
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679443 data_alloc: 218103808 data_used: 204800
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 95 handle_osd_map epochs [96,97], i have 95, src has [1,97]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 811008 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 98 heartbeat osd_stat(store_statfs(0x1bcafb000/0x0/0x1bfc00000, data 0x92eb5/0x120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 753664 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 745472 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693945 data_alloc: 218103808 data_used: 212992
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 729088 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 729088 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 99 heartbeat osd_stat(store_statfs(0x1bcafa000/0x0/0x1bfc00000, data 0x94ae6/0x123000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78946304 unmapped: 688128 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 99 heartbeat osd_stat(store_statfs(0x1bcafa000/0x0/0x1bfc00000, data 0x94ae6/0x123000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78946304 unmapped: 688128 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 679936 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.429924011s of 10.556100845s, submitted: 39
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 702507 data_alloc: 218103808 data_used: 221184
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78962688 unmapped: 671744 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 663552 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 101 handle_osd_map epochs [101,102], i have 102, src has [1,102]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19(unlocked)] enter Initial
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=0 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=0 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000041
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000180 1 0.000061
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000234 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 663552 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 102 handle_osd_map epochs [102,103], i have 103, src has [1,103]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.955569 2 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.955876 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.955934 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000353 1 0.000479
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000158 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 103 heartbeat osd_stat(store_statfs(0x1bcaf1000/0x0/0x1bfc00000, data 0x9a357/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 655360 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 647168 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a(unlocked)] enter Initial
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=0 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=0 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000055
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000371 1 0.000081
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000054 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000504 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.721188 5 0.000402
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 104 heartbeat osd_stat(store_statfs(0x1bcaed000/0x0/0x1bfc00000, data 0x9bfba/0x12f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 50'185 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.256074 4 0.000192
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 50'185 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 50'185 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000171 1 0.000094
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 lc 50'185 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.090393 1 0.000040
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 104 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 727354 data_alloc: 218103808 data_used: 221184
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 1654784 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 104 handle_osd_map epochs [104,105], i have 105, src has [1,105]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b(unlocked)] enter Initial
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=0 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=0 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000042
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000240 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000316
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000238 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.703099 1 0.000063
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.049878 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started 2.771309 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=54'463 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] exit Reset 0.000088 1 0.000123
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000046
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.050621 2 0.000165
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.051154 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.051187 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=0 lpr=104 pi=[76,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000038 1 0.000060
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=42
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=42
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000811 3 0.000041
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 105 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 heartbeat osd_stat(store_statfs(0x1bcaea000/0x0/0x1bfc00000, data 0x9ddc4/0x133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.606949 2 0.000094
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.607229 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.607501 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=105) [0] r=0 lpr=105 pi=[62,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000082 1 0.000122
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.605950 2 0.000073
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.606842 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=103/104 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000101 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002310 3 0.000142
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.19( v 54'463 (0'0,54'463] local-lis/les=105/106 n=7 ec=51/44 lis/c=105/75 les/c/f=106/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=54'463 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 0.610057 6 0.000079
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006953 3 0.000117
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000050 1 0.000056
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.030794 1 0.000059
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 1671168 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.136500 1 0.000080
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.174455 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started 1.784546 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[76,105)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 luod=0'0 crt=53'445 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] exit Reset 0.000105 1 0.000162
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 1671168 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 0'0 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=53'438 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.467575 6 0.000177
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 0'0 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=53'438 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 0'0 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=62/62 les/c/f=63/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 crt=53'438 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.290582 2 0.000055
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001841 2 0.000108
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 52'432 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.072482 3 0.000199
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 52'432 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 52'432 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000126 1 0.000082
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 lc 52'432 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.019158 1 0.000088
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 107 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 107 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.731126 2 0.000078
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.023627 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.642384 1 0.000072
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.734321 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] exit Started 2.202037 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[62,106)/1 luod=0'0 crt=53'438 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 luod=0'0 crt=53'438 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] exit Reset 0.000088 1 0.000135
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=107/76 les/c/f=108/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003983 4 0.000341
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=107/76 les/c/f=108/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=107/76 les/c/f=108/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=107/108 n=4 ec=51/44 lis/c=107/76 les/c/f=108/77/0 sis=107) [0] r=0 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004779 2 0.000040
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0 olog.dups.size()=14
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=14
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001477 2 0.000076
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 108 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 1638400 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.063603 2 0.000048
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.069921 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=106/107 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 1638400 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=106/62 les/c/f=107/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=108/62 les/c/f=109/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.041769 3 0.000225
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=108/62 les/c/f=109/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=108/62 les/c/f=109/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 109 pg[9.1b( v 53'438 (0'0,53'438] local-lis/les=108/109 n=3 ec=51/44 lis/c=108/62 les/c/f=109/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=53'438 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 756223 data_alloc: 218103808 data_used: 229376
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 1630208 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.503093719s of 11.014726639s, submitted: 65
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 1613824 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 109 heartbeat osd_stat(store_statfs(0x1bcad8000/0x0/0x1bfc00000, data 0xa6ddc/0x144000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 1605632 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1597440 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 110 heartbeat osd_stat(store_statfs(0x1bcad6000/0x0/0x1bfc00000, data 0xa8b57/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 111 heartbeat osd_stat(store_statfs(0x1bcad6000/0x0/0x1bfc00000, data 0xa8b57/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1581056 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 767585 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1581056 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 1572864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e(unlocked)] enter Initial
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=0 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=0 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000194 1 0.000052
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000284 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 1564672 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.264268 2 0.000125
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.264724 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.264781 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000242 1 0.000449
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000042 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f(unlocked)] enter Initial
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=0 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=0 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000055
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000210 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 1564672 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.936103 2 0.000068
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.936351 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.936380 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=0 lpr=114 pi=[87,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000082 1 0.000128
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.051683 6 0.000155
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007749 3 0.000264
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000087 1 0.000056
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038125 1 0.000034
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 1662976 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 115 heartbeat osd_stat(store_statfs(0x1bcac5000/0x0/0x1bfc00000, data 0xb1d2f/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.081009 1 0.000053
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.127301 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started 2.179130 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] r=-1 lpr=114 pi=[68,114)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.177225 5 0.000118
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 luod=0'0 crt=54'458 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] exit Reset 0.000338 1 0.000666
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] exit Start 0.000041 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000132
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=27
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001335 3 0.000236
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070368 4 0.000425
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000114 1 0.000136
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.041441 1 0.000055
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799291 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1589248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 116 handle_osd_map epochs [116,117], i have 117, src has [1,117]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.901232 2 0.000098
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.902705 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.790979 1 0.000087
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.903115 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started 2.080513 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] r=-1 lpr=115 pi=[87,115)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Reset 0.000279 1 0.000352
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=116/68 les/c/f=117/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006754 3 0.000166
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=116/68 les/c/f=117/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=116/68 les/c/f=117/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=116/117 n=7 ec=51/44 lis/c=116/68 les/c/f=117/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.015741 2 0.000074
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: merge_log_dups log.dups.size()=0olog.dups.size()=30
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=30
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001558 2 0.000112
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1589248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.498353004s of 10.939938545s, submitted: 52
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.202921 2 0.000064
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.220315 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=117/87 les/c/f=118/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.025670 3 0.000239
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=117/87 les/c/f=118/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=117/87 les/c/f=118/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=117/118 n=6 ec=51/44 lis/c=117/87 les/c/f=118/88/0 sis=117) [0] r=0 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1589248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1581056 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 1572864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805330 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 1572864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 1572864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 1564672 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 1564672 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 1556480 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807624 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 1556480 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 1548288 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 1548288 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.038843155s of 11.169083595s, submitted: 19
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 1523712 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 1523712 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809920 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 1515520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 1515520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 1515520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 1507328 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 1507328 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812216 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 1499136 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 1490944 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 1490944 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 1482752 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.860527992s of 10.943672180s, submitted: 12
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 1482752 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815657 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 1482752 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79208448 unmapped: 1474560 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79208448 unmapped: 1474560 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 1466368 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 1466368 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816804 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 1458176 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 1458176 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 1458176 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 1441792 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.000469208s of 10.025480270s, submitted: 6
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 1433600 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819100 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 1425408 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 1425408 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 1425408 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 1417216 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 1417216 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822544 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 1400832 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 1400832 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 1400832 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 1392640 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 1392640 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822544 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 1384448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 1384448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 1384448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.953351974s of 13.991852760s, submitted: 8
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 1376256 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 1368064 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823692 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 1368064 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 1359872 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 1359872 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 1351680 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 1351680 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824840 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 1343488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 1343488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 1343488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 1335296 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 1335296 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.874423027s of 11.906815529s, submitted: 6
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827137 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 1335296 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 1327104 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 1327104 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 1318912 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 1318912 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828286 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 1310720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 1310720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 1310720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 1302528 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 1302528 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.036384583s of 10.069998741s, submitted: 10
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832882 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 245760 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 237568 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1277952 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1269760 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1269760 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835180 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 1261568 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 1261568 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1245184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1245184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1245184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835180 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1236992 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 1236992 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.041921616s of 12.065209389s, submitted: 6
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1228800 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1220608 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 1220608 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838624 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1212416 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1204224 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1204224 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1196032 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1196032 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839773 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1179648 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 1171456 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1155072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1155072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1155072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840922 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1146880 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1146880 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1146880 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1138688 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 1138688 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840922 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1130496 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.978126526s of 19.034505844s, submitted: 10
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1130496 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1122304 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1122304 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1122304 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842071 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1105920 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1105920 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1097728 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1097728 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1097728 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844367 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1089536 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1089536 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.869263649s of 10.892516136s, submitted: 6
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1089536 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1073152 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 1073152 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845515 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1064960 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1056768 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1048576 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1048576 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1040384 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1040384 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1040384 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1032192 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1032192 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1024000 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1024000 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1024000 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1015808 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1015808 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1015808 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1007616 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 999424 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 991232 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 991232 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 983040 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 983040 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 974848 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 966656 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 966656 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 958464 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 958464 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 958464 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 950272 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 950272 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 950272 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 942080 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 942080 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 933888 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 933888 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 933888 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 909312 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 909312 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 901120 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 901120 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 892928 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 892928 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 892928 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 884736 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 884736 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 884736 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 876544 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 876544 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 860160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 860160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 860160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 851968 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 851968 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 843776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 843776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 835584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 835584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 835584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 827392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 827392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 819200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 819200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 819200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 811008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 811008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 811008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 802816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 802816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 794624 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 794624 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 786432 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 786432 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 778240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 778240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 778240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 770048 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 770048 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 761856 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 761856 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 761856 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 753664 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 745472 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 737280 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 737280 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 729088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 729088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 729088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 729088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 720896 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 720896 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 712704 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 712704 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 704512 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 704512 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 704512 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 696320 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 696320 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 688128 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 688128 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 688128 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 679936 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 679936 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 671744 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 671744 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 663552 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 663552 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 663552 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 655360 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 655360 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 655360 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 647168 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 647168 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 638976 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 638976 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 630784 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 630784 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 630784 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 622592 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 622592 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 614400 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 614400 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 606208 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 606208 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 598016 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 598016 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 598016 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 589824 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 581632 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 581632 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 573440 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 573440 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 565248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 565248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 565248 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 557056 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 557056 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 548864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 548864 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 540672 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 532480 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 532480 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 524288 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 524288 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 516096 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 516096 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 516096 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 499712 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 499712 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 491520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 491520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 491520 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 483328 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 483328 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 475136 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 475136 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 475136 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 458752 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 458752 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 450560 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 450560 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 450560 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 442368 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 442368 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 434176 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 434176 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 425984 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 425984 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 425984 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 417792 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 417792 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 417792 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 401408 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 401408 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 393216 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 393216 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 393216 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 385024 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 385024 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 376832 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 376832 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 368640 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 368640 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 368640 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 360448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 360448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 360448 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 352256 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 344064 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 335872 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 335872 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 327680 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 327680 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 327680 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 319488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 319488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 319488 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 311296 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 311296 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 303104 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 303104 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 303104 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 294912 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 294912 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 286720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 286720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 286720 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 278528 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 278528 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 270336 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 270336 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 262144 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 262144 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 253952 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 245760 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 245760 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 237568 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 229376 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 229376 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 221184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 221184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 221184 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 212992 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 212992 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 204800 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 196608 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 196608 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 188416 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 188416 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 180224 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 180224 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 180224 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 172032 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 172032 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 163840 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 163840 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 155648 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 155648 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 155648 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 147456 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 147456 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 147456 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 139264 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 139264 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 131072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 131072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 131072 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 122880 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 122880 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 114688 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 114688 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 114688 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 106496 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 106496 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 106496 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 98304 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 98304 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 90112 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 90112 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 90112 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 81920 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 73728 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 65536 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 65536 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 57344 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 57344 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 57344 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 49152 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 49152 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 49152 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 7882 writes, 32K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 7882 writes, 1442 syncs, 5.47 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7882 writes, 32K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 20.43 MB, 0.03 MB/s
Interval WAL: 7882 writes, 1442 syncs, 5.47 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1024000 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1024000 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1015808 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1015808 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1007616 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1007616 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1007616 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 999424 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 999424 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 991232 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 991232 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 983040 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 983040 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 983040 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 974848 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 974848 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 974848 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 966656 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 958464 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 950272 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 950272 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 950272 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 942080 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 942080 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 933888 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 933888 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 933888 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 925696 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 925696 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 925696 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 917504 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 917504 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 909312 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 909312 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 901120 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 901120 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 892928 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 884736 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 884736 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 868352 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 868352 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 868352 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 860160 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 860160 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 860160 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 851968 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 843776 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 835584 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 835584 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 835584 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 827392 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 827392 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 819200 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 819200 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 819200 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 811008 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 811008 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 802816 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 802816 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 802816 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 794624 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 794624 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 786432 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 786432 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 786432 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 778240 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 778240 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 770048 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 770048 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 761856 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 761856 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 761856 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 753664 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 753664 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 745472 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 745472 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 745472 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 737280 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 737280 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 729088 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 729088 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 712704 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 712704 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 712704 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 704512 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 704512 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 704512 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 696320 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 696320 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 688128 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 688128 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 688128 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 679936 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 679936 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 671744 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 671744 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 663552 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 663552 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 663552 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 655360 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 655360 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 655360 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 647168 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 647168 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 638976 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 638976 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 638976 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 630784 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 622592 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 614400 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 614400 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 606208 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 606208 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 606208 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 598016 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 598016 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 589824 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 589824 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 589824 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 581632 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 573440 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 393.620117188s of 393.639526367s, submitted: 6
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 548864 heap: 81731584 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bcabe000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,1])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847883 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1384448 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1220608 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1220608 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1220608 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1212416 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1204224 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1196032 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1187840 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1179648 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1171456 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1163264 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: mgrc ms_handle_reset ms_handle_reset con 0x55be658ffc00
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4113492602
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4113492602,v1:192.168.122.100:6801/4113492602]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: mgrc handle_mgr_configure stats_period=5
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 ms_handle_reset con 0x55be66507000 session 0x55be6723c1e0
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 901120 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 884736 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 876544 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 868352 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 860160 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 8544 writes, 33K keys, 8544 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 8544 writes, 1756 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 662 writes, 1039 keys, 662 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
Interval WAL: 662 writes, 314 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55be64ab8f30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 851968 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 843776 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 835584 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 827392 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 819200 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 811008 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 811008 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 811008 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 811008 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 802816 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.018554688s of 600.198364258s, submitted: 348
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 942080 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,1])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 909312 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 892928 heap: 82780160 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 1925120 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 heartbeat osd_stat(store_statfs(0x1bc6ae000/0x0/0x1bfc00000, data 0xb727b/0x160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847811 data_alloc: 218103808 data_used: 237568
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1802240 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.811480522s of 17.501916885s, submitted: 294
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 1794048 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 119 heartbeat osd_stat(store_statfs(0x1bc6aa000/0x0/0x1bfc00000, data 0xb8ff6/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851985 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 1777664 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 1777664 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 120 heartbeat osd_stat(store_statfs(0x1bc6a6000/0x0/0x1bfc00000, data 0xbadc5/0x166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a6000/0x0/0x1bfc00000, data 0xbadc5/0x166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1744896 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: bluestore.MempoolThread(0x55be64b97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857933 data_alloc: 218103808 data_used: 245760
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1753088 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'config diff' '{prefix=config diff}'
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'config show' '{prefix=config show}'
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 770048 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1212416 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: osd.0 121 heartbeat osd_stat(store_statfs(0x1bc6a4000/0x0/0x1bfc00000, data 0xbca26/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 1015808 heap: 83828736 old mem: 2845415832 new mem: 2845415832
Jan 31 02:16:52 np0005603541 ceph-osd[84743]: do_command 'log dump' '{prefix=log dump}'
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135518504' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24950 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:52 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:52 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:52 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15156 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 02:16:52 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811136538' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24962 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:52 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24988 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24977 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:16:53 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116236357' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.24995 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:53 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:53 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:53 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:53.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:53 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:53 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15192 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:53 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25018 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 02:16:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396256351' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25013 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15210 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 02:16:54 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366351136' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 02:16:54 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:54 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:54 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15222 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25045 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25046 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:54 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:54.902+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:54 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:55 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15240 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:55 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25057 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140399047' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 02:16:55 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25072 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:55 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15273 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:55 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:55.760+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:55 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:55 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:55 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:55 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:55.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 02:16:55 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700791227' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25087 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625497177' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772204015' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:56 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25102 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:56 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:56 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:56 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850840323' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 02:16:56 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040105218' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 02:16:56 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807619893' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567898811' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 02:16:57 np0005603541 systemd[1]: Starting Hostname Service...
Jan 31 02:16:57 np0005603541 systemd[1]: Started Hostname Service.
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3551956014' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213005337' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:57 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:57 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:57 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:57.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:57 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:57 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25135 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:57 np0005603541 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-0-gghdjs[74644]: 2026-01-31T07:16:57.958+0000 7f6ece6f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:57 np0005603541 ceph-mgr[74648]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232109545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232217152' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918116384' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400978574' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 02:16:58 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:58 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:58 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:58 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25187 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 6105 writes, 28K keys, 6102 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 6105 writes, 6102 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1895 writes, 9153 keys, 1894 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s#012Interval WAL: 1896 writes, 1895 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.6      0.35              0.08        16    0.022       0      0       0.0       0.0#012  L6      1/0    7.28 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    142.9    120.0      1.04              0.31        15    0.069     86K   7992       0.0       0.0#012 Sum      1/0    7.28 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.0    106.8    112.0      1.39              0.39        31    0.045     86K   7992       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.3    130.7    131.8      0.51              0.18        14    0.037     45K   3612       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.9    120.0      1.04              0.31        15    0.069     86K   7992       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     89.0      0.35              0.08        15    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.15 GB write, 0.09 MB/s write, 0.14 GB read, 0.08 MB/s read, 1.4 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561559fff1f0#2 capacity: 308.00 MB usage: 14.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000171 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(834,13.69 MB,4.44601%) FilterBlock(32,249.36 KB,0.0790633%) IndexBlock(32,399.28 KB,0.126598%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:16:58 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25196 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: Health check update: 2 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577577129' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 02:16:58 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2347524087' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25202 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:58 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25211 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15405 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15393 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25223 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15414 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15420 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:16:59 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:16:59 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:59 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:59 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25238 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15429 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:17:00 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:17:00 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:00 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15438 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25268 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 02:17:00 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773264957' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25243 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:00 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:17:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854097873' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15465 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667654753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667654753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/327818538' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25267 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 02:17:01 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:17:01 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:01 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:01 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.15483 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 02:17:01 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 02:17:02 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25288 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839043288' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 02:17:02 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25318 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:02 np0005603541 radosgw[93037]: ====== starting new request req=0x7f386097d6f0 =====
Jan 31 02:17:02 np0005603541 radosgw[93037]: ====== req done req=0x7f386097d6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:02 np0005603541 radosgw[93037]: beast: 0x7f386097d6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 02:17:02 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:02 np0005603541 ceph-mgr[74648]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:17:03 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25345 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:03 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25400 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:17:03 np0005603541 ceph-mgr[74648]: log_channel(audit) log [DBG] : from='client.25366 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 02:17:03 np0005603541 ceph-mon[74355]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 02:17:03 np0005603541 ceph-mon[74355]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404632863' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
